Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 78–87, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics SITS: A Hierarchical Nonparametric Model using Speaker Identity for Topic Segmentation in Multiparty Conversations Viet-An Nguyen Department of Computer Science and UMIACS University of Maryland College Park, MD [email protected] Jordan Boyd-Graber iSchool and UMIACS University of Maryland College Park, MD [email protected] Philip Resnik Department of Linguistics and UMIACS University of Maryland College Park, MD [email protected] Abstract One of the key tasks for analyzing conversational data is segmenting it into coherent topic segments. However, most models of topic segmentation ignore the social aspect of conversations, focusing only on the words used. We introduce a hierarchical Bayesian nonparametric model, Speaker Identity for Topic Segmentation (SITS), that discovers (1) the topics used in a conversation, (2) how these topics are shared across conversations, (3) when these topics shift, and (4) a person-specific tendency to introduce new topics. We evaluate against current unsupervised segmentation models to show that including personspecific information improves segmentation performance on meeting corpora and on political debates. Moreover, we provide evidence that SITS captures an individual’s tendency to introduce new topics in political contexts, via analysis of the 2008 US presidential debates and the television program Crossfire. 1 Topic Segmentation as a Social Process Conversation, interactive discussion between two or more people, is one of the most essential and common forms of communication. Whether in an informal situation or in more formal settings such as a political debate or business meeting, a conversation is often not about just one thing: topics evolve and are replaced as the conversation unfolds. Discovering this hidden structure in conversations is a key problem for conversational assistants (Tur et al., 2010) and tools that summarize (Murray et al., 2005) and display (Ehlen et al., 2007) conversational data. Topic segmentation also can illuminate individuals’ agendas (Boydstun et al., 2011), patterns of agreement and disagreement (Hawes et al., 2009; Abbott et al., 2011), and relationships among conversational participants (Ireland et al., 2011). One of the most natural ways to capture conversational structure is topic segmentation (Reynar, 1998; Purver, 2011). Topic segmentation approaches range from simple heuristic methods based on lexical similarity (Morris and Hirst, 1991; Hearst, 1997) to more intricate generative models and supervised methods (Georgescul et al., 2006; Purver et al., 2006; Gruber et al., 2007; Eisenstein and Barzilay, 2008), which have been shown to outperform the established heuristics. However, previous computational work on conversational structure, particularly in topic discovery and topic segmentation, focuses primarily on content, ignoring the speakers. We argue that, because conversation is a social process, we can understand conversational phenomena better by explicitly modeling behaviors of conversational participants. In Section 2, we incorporate participant identity in a new model we call Speaker Identity for Topic Segmentation (SITS), which discovers topical structure in conversation while jointly incorporating a participantlevel social component. Specifically, we explicitly model an individual’s tendency to introduce a topic. 
After outlining inference in Section 3 and introducing data in Section 4, we use SITS to improve state-ofthe-art-topic segmentation and topic identification models in Section 5. In addition, in Section 6, we also show that the per-speaker model is able to discover individuals who shape and influence the course of a conversation. Finally, we discuss related work and conclude the paper in Section 7. 2 Modeling Multiparty Discussions Data Properties We are interested in turn-taking, multiparty discussion. This is a broad category, in78 cluding political debates, business meetings, and online chats. More formally, such datasets contain C conversations. A conversation c has Tc turns, each of which is a maximal uninterrupted utterance by one speaker.1 In each turn t ∈[1, Tc], a speaker ac,t utters N words {wc,t,n}. Each word is from a vocabulary of size V , and there are M distinct speakers. Modeling Approaches The key insight of topic segmentation is that segments evince lexical cohesion (Galley et al., 2003; Olney and Cai, 2005). Words within a segment will look more like their neighbors than other words. This insight has been used to tune supervised methods (Hsueh et al., 2006) and inspire unsupervised models of lexical cohesion using bags of words (Purver et al., 2006) and language models (Eisenstein and Barzilay, 2008). We too take the unsupervised statistical approach. It requires few resources and is applicable in many domains without extensive training. Like previous approaches, we consider each turn to be a bag of words generated from an admixture of topics. Topics—after the topic modeling literature (Blei and Lafferty, 2009)—are multinomial distributions over terms. These topics are part of a generative model posited to have produced a corpus. However, topic models alone cannot model the dynamics of a conversation. Topic models typically do not model the temporal dynamics of individual documents, and those that do (Wang et al., 2008; Gerrish and Blei, 2010) are designed for larger documents and are not applicable here because they assume that most topics appear in every time slice. Instead, we endow each turn with a binary latent variable lc,t, called the topic shift. This latent variable signifies whether the speaker changed the topic of the conversation. To capture the topic-controlling behavior of the speakers across different conversations, we further associate each speaker m with a latent topic shift tendency, πm. Informally, this variable is intended to capture the propensity of a speaker to effect a topic shift. Formally, it represents the probability that the speaker m will change the topic (distribution) of a conversation. We take a Bayesian nonparametric approach (M¨uller and Quintana, 2004). Unlike 1Note the distinction with phonetic utterances, which by definition are bounded by silence. parametric models, which a priori fix the number of topics, nonparametric models use a flexible number of topics to better represent data. Nonparametric distributions such as the Dirichlet process (Ferguson, 1973) share statistical strength among conversations using a hierarchical model, such as the hierarchical Dirichlet process (HDP) (Teh et al., 2006). 2.1 Generative Process In this section, we develop SITS, a generative model of multiparty discourse that jointly discovers topics and speaker-specific topic shifts from an unannotated corpus (Figure 1a). 
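Before walking through the generative story, it may help to fix a concrete picture of the input described above: C conversations, each a sequence of turns, where every turn has one speaker and a bag of word tokens. The sketch below is ours, not the authors' code; the class and field names are purely illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    speaker: int        # speaker index a_{c,t}, one of M distinct speakers
    words: List[int]    # token indices w_{c,t,n}, each drawn from a vocabulary of size V

@dataclass
class Conversation:
    turns: List[Turn]   # T_c maximal uninterrupted utterances

# A corpus is simply a list of conversations; the M speakers and the
# V-word vocabulary are shared across all of them.
Corpus = List[Conversation]
```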
As in the hierarchical Dirichlet process (Teh et al., 2006), we allow an unbounded number of topics to be shared among the turns of the corpus. Topics are drawn from a base distribution H over multinomial distributions over the vocabulary, a finite Dirichlet with symmetric prior λ. Unlike the HDP, where every document (here, every turn) draws a new multinomial distribution from a Dirichlet process, the social and temporal dynamics of a conversation, as specified by the binary topic shift indicator lc,t, determine when new draws happen. The full generative process is as follows:

1. For speaker m ∈ [1, M], draw speaker shift probability πm ∼ Beta(γ)
2. Draw global probability measure G0 ∼ DP(α, H)
3. For each conversation c ∈ [1, C]
   (a) Draw conversation distribution Gc ∼ DP(α0, G0)
   (b) For each turn t ∈ [1, Tc] with speaker ac,t
       i. If t = 1, set the topic shift lc,t = 1. Otherwise, draw lc,t ∼ Bernoulli(πac,t).
       ii. If lc,t = 1, draw Gc,t ∼ DP(αc, Gc). Otherwise, set Gc,t ≡ Gc,t−1.
       iii. For each word index n ∈ [1, Nc,t]
            • Draw ψc,t,n ∼ Gc,t
            • Draw wc,t,n ∼ Multinomial(ψc,t,n)

The hierarchy of Dirichlet processes allows statistical strength to be shared across contexts; within a conversation and across conversations. The per-speaker topic shift tendency πm allows speaker identity to influence the evolution of topics. To make notation concrete and aligned with the topic segmentation, we introduce notation for segments in a conversation. A segment s of conversation c is a sequence of turns [τ, τ′] such that lc,τ = lc,τ′+1 = 1 and lc,t = 0, ∀t ∈ (τ, τ′]. When lc,t = 0, Gc,t is the same as Gc,t−1 and all topics (i.e. multinomial distributions over words) {ψc,t,n} that generate words in turn t and the topics {ψc,t−1,n} that generate words in turn t − 1 come from the same distribution.

[Figure 1: Graphical model representations of our proposed models: (a) the nonparametric version; (b) the parametric version. Nodes represent random variables (shaded ones are observed), lines are probabilistic dependencies. Plates represent repetition. The innermost plates are turns, grouped in conversations.]

Thus all topics used in a segment s are drawn from a single distribution, Gc,s,

$$G_{c,s} \mid l_{c,1}, l_{c,2}, \dots, l_{c,T_c}, \alpha_c, G_c \sim \mathrm{DP}(\alpha_c, G_c) \quad (1)$$

For notational convenience, Sc denotes the number of segments in conversation c, and st denotes the segment index of turn t. We emphasize that all segment-related notations are derived from the posterior over the topic shifts l and not part of the model itself.

Parametric Version SITS is a generalization of a parametric model (Figure 1b) where each turn has a multinomial distribution over K topics. In the parametric case, the number of topics K is fixed. Each topic, as before, is a multinomial distribution φ1 . . . φK. In the parametric case, each turn t in conversation c has an explicit multinomial distribution over K topics θc,t, identical for turns within a segment. A new topic distribution θ is drawn from a Dirichlet distribution parameterized by α when the topic shift indicator l is 1. The parametric version does not share strength within or across conversations, unlike SITS. When applied on a single conversation without speaker identity (all speakers are identical) it is equivalent to (Purver et al., 2006). In our experiments (Section 5), we compare against both.
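As a concrete illustration of the generative story, here is a minimal simulation of the parametric variant in Figure 1b: K fixed topics, and a per-turn topic distribution θ that is re-drawn only when the topic shift indicator fires. This is our own sketch, not the authors' implementation; the function name, the symmetric Beta(γ, γ) prior on π, and the use of NumPy are assumptions.

```python
import numpy as np

def simulate_parametric_sits(speakers, n_words_per_turn, K, V,
                             alpha=1.0, beta=0.1, gamma=1.0, rng=None):
    """Sample one conversation from the parametric model of Figure 1b (sketch).

    speakers: list of speaker ids, one per turn (a_{c,t}).
    Returns per-turn word lists, topic-shift indicators l, and the topics phi.
    """
    rng = rng or np.random.default_rng()
    M = max(speakers) + 1
    phi = rng.dirichlet(np.full(V, beta), size=K)    # K topics over the vocabulary
    pi = rng.beta(gamma, gamma, size=M)              # per-speaker shift tendency pi_m
    turns, shifts = [], []
    theta = None
    for t, m in enumerate(speakers):
        l = 1 if t == 0 else rng.binomial(1, pi[m])  # first turn always shifts
        if l == 1:
            theta = rng.dirichlet(np.full(K, alpha)) # new distribution over topics
        z = rng.choice(K, size=n_words_per_turn, p=theta)
        words = [rng.choice(V, p=phi[k]) for k in z]
        turns.append(words)
        shifts.append(l)
    return turns, shifts, phi
```

The nonparametric model replaces the fixed-K Dirichlet draws with draws from the hierarchy of Dirichlet processes described above, so that the number of topics is inferred rather than fixed.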
3 Inference

To find the latent variables that best explain observed data, we use Gibbs sampling, a widely used Markov chain Monte Carlo inference technique (Neal, 2000; Resnik and Hardisty, 2010). The state space is latent variables for topic indices assigned to all tokens z = {zc,t,n} and topic shifts assigned to turns l = {lc,t}. We marginalize over all other latent variables. Here, we only present the conditional sampling equations; for more details, see our supplement.2

3.1 Sampling Topic Assignments

To sample zc,t,n, the index of the shared topic assigned to token n of turn t in conversation c, we need to sample the path assigning each word token to a segment-specific topic, each segment-specific topic to a conversational topic and each conversational topic to a shared topic. For efficiency, we make use of the minimal path assumption (Wallach, 2008) to generate these assignments.3 Under the minimal path assumption, an observation is assumed to have been generated by using a new distribution if and only if there is no existing distribution with the same value.

2 http://www.cs.umd.edu/~vietan/topicshift/appendix.pdf
3 We also investigated using the maximal assumption and fully sampling assignments. We found the minimal path assumption worked as well as explicitly sampling seating assignments and that the maximal path assumption worked less well.

We use Nc,s,k to denote the number of tokens in segment s in conversation c assigned topic k; Nc,k denotes the total number of segment-specific topics in conversation c assigned topic k and Nk denotes the number of conversational topics assigned topic k. TWk,w denotes the number of times the shared topic k is assigned to word w in the vocabulary. Marginal counts are represented with · and ∗ represents all hyperparameters. The conditional distribution for zc,t,n is

$$P(z_{c,t,n} = k \mid w_{c,t,n} = w, \mathbf{z}^{-c,t,n}, \mathbf{w}^{-c,t,n}, \mathbf{l}, *) \propto \frac{N^{-c,t,n}_{c,s_t,k} + \alpha_c \dfrac{N^{-c,t,n}_{c,k} + \alpha_0 \dfrac{N^{-c,t,n}_{k} + \alpha/K}{N^{-c,t,n}_{\cdot} + \alpha}}{N^{-c,t,n}_{c,\cdot} + \alpha_0}}{N^{-c,t,n}_{c,s_t,\cdot} + \alpha_c} \times \begin{cases} \dfrac{TW^{-c,t,n}_{k,w} + \lambda}{TW^{-c,t,n}_{k,\cdot} + V\lambda}, & k \text{ existing} \\[1ex] \dfrac{1}{V}, & k \text{ new.} \end{cases} \quad (2)$$

Here V is the size of the vocabulary, K is the current number of shared topics and the superscript −c,t,n denotes counts without considering wc,t,n. In Equation 2, the first factor is proportional to the probability of sampling a path according to the minimal path assumption; the second factor is proportional to the likelihood of observing w given the sampled topic. Since an uninformed prior is used, when a new topic is sampled, all tokens are equiprobable.

3.2 Sampling Topic Shifts

Sampling the topic shift variable lc,t requires us to consider merging or splitting segments. We use kc,t to denote the shared topic indices of all tokens in turn t of conversation c; Sac,t,x to denote the number of times speaker ac,t is assigned the topic shift with value x ∈ {0, 1}; Jx_{c,s} to denote the number of topics in segment s of conversation c if lc,t = x and Nx_{c,s,j} to denote the number of tokens assigned to the segment-specific topic j when lc,t = x.4 Again, the superscript −c,t is used to denote exclusion of turn t of conversation c in the corresponding counts. Recall that the topic shift is a binary variable. We use 0 to represent the case that the topic distribution is identical to the previous turn. We sample this assignment as

$$P(l_{c,t} = 0 \mid \mathbf{l}^{-c,t}, \mathbf{w}, \mathbf{k}, \mathbf{a}, *) \propto \frac{S^{-c,t}_{a_{c,t},0} + \gamma}{S^{-c,t}_{a_{c,t},\cdot} + 2\gamma} \times \frac{\alpha_c^{J^0_{c,s_t}} \prod_{j=1}^{J^0_{c,s_t}} \left(N^0_{c,s_t,j} - 1\right)!}{\prod_{x=1}^{N^0_{c,s_t,\cdot}} \left(x - 1 + \alpha_c\right)} \quad (3)$$

4 Deterministically knowing the path assignments is the primary efficiency motivation for using the minimal path assumption. The alternative is to explicitly sample the path assignments, which is more complicated (for both notation and computation). This option is spelled out in full detail in the supplementary material.

In Equation 3, the first factor is proportional to the probability of assigning a topic shift of value 0 to speaker ac,t and the second factor is proportional to the joint probability of all topics in segment st of conversation c when lc,t = 0. The other alternative is for the topic shift to be 1, which represents the introduction of a new distribution over topics inside an existing segment. We sample this as

$$P(l_{c,t} = 1 \mid \mathbf{l}^{-c,t}, \mathbf{w}, \mathbf{k}, \mathbf{a}, *) \propto \frac{S^{-c,t}_{a_{c,t},1} + \gamma}{S^{-c,t}_{a_{c,t},\cdot} + 2\gamma} \times \left( \frac{\alpha_c^{J^1_{c,(s_t-1)}} \prod_{j=1}^{J^1_{c,(s_t-1)}} \left(N^1_{c,(s_t-1),j} - 1\right)!}{\prod_{x=1}^{N^1_{c,(s_t-1),\cdot}} \left(x - 1 + \alpha_c\right)} \cdot \frac{\alpha_c^{J^1_{c,s_t}} \prod_{j=1}^{J^1_{c,s_t}} \left(N^1_{c,s_t,j} - 1\right)!}{\prod_{x=1}^{N^1_{c,s_t,\cdot}} \left(x - 1 + \alpha_c\right)} \right) \quad (4)$$

As above, the first factor in Equation 4 is proportional to the probability of assigning a topic shift of value 1 to speaker ac,t; the second factor in the big bracket is proportional to the joint distribution of the topics in segments st − 1 and st. In this case lc,t = 1 means splitting the current segment, which results in two joint probabilities for two segments.
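To make Equation 2 concrete, here is a small sketch that turns the relevant count arrays into the (unnormalised, then normalised) sampling distribution over existing topics plus a new one. This is our own reading of the equation, not the authors' code; the argument names, the handling of the "new topic" case, and the NumPy dependency are assumptions.

```python
import numpy as np

def topic_assignment_probs(n_seg_k, n_conv_k, n_glob_k, tw_kw, tw_k,
                           alpha_c, alpha_0, alpha, lam, V):
    """Sampling distribution for z_{c,t,n} (Equation 2), a sketch.

    All count arrays exclude the token currently being resampled:
      n_seg_k[k]  : tokens in the current segment assigned to shared topic k
      n_conv_k[k] : segment-level topics in conversation c assigned to k
      n_glob_k[k] : conversation-level topics assigned to k
      tw_kw[k]    : times shared topic k generated the current word w
      tw_k[k]     : total tokens assigned to shared topic k
    """
    K = len(n_glob_k)
    # First factor of Eq. 2: path probability under the minimal path assumption.
    inner = (n_glob_k + alpha / K) / (n_glob_k.sum() + alpha)
    middle = (n_conv_k + alpha_0 * inner) / (n_conv_k.sum() + alpha_0)
    path = (n_seg_k + alpha_c * middle) / (n_seg_k.sum() + alpha_c)
    # Second factor of Eq. 2: likelihood of the word under each existing topic.
    likelihood = (tw_kw + lam) / (tw_k + V * lam)
    probs = np.empty(K + 1)
    probs[:K] = path * likelihood
    # New topic: same path expression with zero counts, uniform 1/V likelihood
    # (our reading of the "k new" branch).
    new_inner = (alpha / K) / (n_glob_k.sum() + alpha)
    new_path = (alpha_c * alpha_0 * new_inner / (n_conv_k.sum() + alpha_0)) / (n_seg_k.sum() + alpha_c)
    probs[K] = new_path * (1.0 / V)
    return probs / probs.sum()
```

A full sampler would update the counts after drawing from this distribution and interleave the topic-shift updates of Equations 3 and 4.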
4 Datasets

This section introduces the three corpora we use. We preprocess the data to remove stopwords and remove turns containing fewer than five tokens.

The ICSI Meeting Corpus: The ICSI Meeting Corpus (Janin et al., 2003) is 75 transcribed meetings. For evaluation, we used a standard set of reference segmentations (Galley et al., 2003) of 25 meetings. Segmentations are binary, i.e., each point of the document is either a segment boundary or not, and on average each meeting has 8 segment boundaries. After preprocessing, there are 60 unique speakers and the vocabulary contains 3346 non-stopword tokens.

The 2008 Presidential Election Debates Our second dataset contains three annotated presidential debates (Boydstun et al., 2011) between Barack Obama and John McCain and a vice presidential debate between Joe Biden and Sarah Palin. Each turn is one of two types: questions (Q) from the moderator or responses (R) from a candidate. Each clause in a turn is coded with a Question Topic (TQ) and a Response Topic (TR). Thus, a turn has a list of TQ's and TR's both of length equal to the number of clauses in the turn. Topics are from the Policy Agendas Topics

Speaker | Type | Turn clauses | TQ | TR
Brokaw | Q | Sen. Obama, [. . .] Are you saying [. . .] that the American economy is going to get much worse before it gets better and they ought to be prepared for that? | 1 | N/A
Obama | R | No, I am confident about the American economy. | 1 | 1
       |   | [. . .] But most importantly, we're going to have to help ordinary families be able to stay in their homes, make sure that they can pay their bills [. . .] | 1 | 14
Brokaw | Q | Sen. McCain, in all candor, do you think the economy is going to get worse before it gets better? | 1 | N/A
McCain | R | [. . .] I think if we act effectively, if we stabilize the housing market–which I believe we can, | 1 | 14
       |   | if we go out and buy up these bad loans, so that people can have a new mortgage at the new value of their home | 1 | 14
       |   | I think if we get rid of the cronyism and special interest influence in Washington so we can act more effectively. [. . .] | 1 | 20

Table 1: Example turns from the annotated 2008 election debates.
The topics (TQ and TR) are from the Policy Agendas Topics Codebook which contains the following codes of topic: Macroeconomics (1), Housing & Community Development (14), Government Operations (20). Codebook, a manual inventory of 19 major topics and 225 subtopics.5 Table 1 shows an example annotation. To get reference segmentations, we assign each turn a real value from 0 to 1 indicating how much a turn changes the topic. For a question-typed turn, the score is the fraction of clause topics not appearing in the previous turn; for response-typed turns, the score is the fraction of clause topics that do not appear in the corresponding question. This results in a set of non-binary reference segmentations. For evaluation metrics that require binary segmentations, we create a binary segmentation by setting a turn as a segment boundary if the computed score is 1. This threshold is chosen to include only true segment boundaries. CNN’s Crossfire Crossfire was a weekly U.S. television “talking heads” program engineered to incite heated arguments (hence the name). Each episode features two recurring hosts, two guests, and clips from the week’s news. Our Crossfire dataset contains 1134 transcribed episodes aired between 2000 and 2004.6 There are 2567 unique speakers. Unlike the previous two datasets, Crossfire does not have explicit topic segmentations, so we use it to explore speaker-specific characteristics (Section 6). 5 Topic Segmentation Experiments In this section, we examine how well SITS can replicate annotations of when new topics are introduced. 5 http://www.policyagendas.org/page/topic-codebook 6 http://www.cs.umd.edu/∼vietan/topicshift/crossfire.zip We discuss metrics for evaluating an algorithm’s segmentation against a gold annotation, describe our experimental setup, and report those results. Evaluation Metrics To evaluate segmentations, we use Pk (Beeferman et al., 1999) and WindowDiff (WD) (Pevzner and Hearst, 2002). Both metrics measure the probability that two points in a document will be incorrectly separated by a segment boundary. Both techniques consider all spans of length k in the document and count whether the two endpoints of the window are (im)properly segmented against the gold segmentation. However, these metrics have drawbacks. First, they require both hypothesized and reference segmentations to be binary. Many algorithms (e.g., probabilistic approaches) give non-binary segmentations where candidate boundaries have real-valued scores (e.g., probability or confidence). Thus, evaluation requires arbitrary thresholding to binarize soft scores. To be fair, thresholds are set so the number of segments are equal to a predefined value (Purver et al., 2006; Galley et al., 2003). To overcome these limitations, we also use Earth Mover’s Distance (EMD) (Rubner et al., 2000), a metric that measures the distance between two distributions. The EMD is the minimal cost to transform one distribution into the other. Each segmentation can be considered a multi-dimensional distribution where each candidate boundary is a dimension. In EMD, a distance function across features allows partial credit for “near miss” segment boundaries. In 82 addition, because EMD operates on distributions, we can compute the distance between non-binary hypothesized segmentations with binary or real-valued reference segmentations. We use the FastEMD implementation (Pele and Werman, 2009). 
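Window-based segmentation metrics are easy to mis-implement, so the sketch below spells out the Pk computation under one common convention: boundaries are indices between adjacent turns, and the window width k is fixed by the caller (often half the mean reference segment length). This is our own sketch, not the evaluation code used in the paper; the EMD evaluation via FastEMD is not reproduced here.

```python
def pk_metric(ref_bounds, hyp_bounds, n, k):
    """P_k segmentation error (Beeferman et al., 1999), a minimal sketch.

    ref_bounds, hyp_bounds: sets of indices b meaning "a boundary falls
    between unit b and unit b+1"; n is the number of units (turns).
    Returns the fraction of width-k windows on which reference and
    hypothesis disagree about whether the two window endpoints lie in
    the same segment.
    """
    def same_segment(bounds, i, j):
        # True iff no boundary falls between positions i and j.
        return not any(i <= b < j for b in bounds)

    windows = n - k
    if windows <= 0:
        return 0.0
    errors = sum(
        same_segment(ref_bounds, i, i + k) != same_segment(hyp_bounds, i, i + k)
        for i in range(windows)
    )
    return errors / windows
```

WindowDiff follows the same windowing scheme but compares the number of boundaries inside each window rather than applying a same-segment test, which penalises near misses less harshly.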
Experimental Methods We applied the following methods to discover topic segmentations in a document: • TextTiling (Hearst, 1997) is one of the earliest generalpurpose topic segmentation algorithms, sliding a fixedwidth window to detect major changes in lexical similarity. • P-NoSpeaker-S: parametric version without speaker identity run on each conversation (Purver et al., 2006) • P-NoSpeaker-M: parametric version without speaker identity run on all conversations • P-SITS: the parametric version of SITS with speaker identity run on all conversations • NP-HMM: the HMM-based nonparametric model which a single topic per turn. This model can be considered a Sticky HDP-HMM (Fox et al., 2008) with speaker identity. • NP-SITS: the nonparametric version of SITS with speaker identity run on all conversations. Parameter Settings and Implementations In our experiment, all parameters of TextTiling are the same as in (Hearst, 1997). For statistical models, Gibbs sampling with 10 randomly initialized chains is used. Initial hyperparameter values are sampled from U(0, 1) to favor sparsity; statistics are collected after 500 burn-in iterations with a lag of 25 iterations over a total of 5000 iterations; and slice sampling (Neal, 2003) optimizes hyperparameters. Results and Analysis Table 2 shows the performance of various models on the topic segmentation problem, using the ICSI corpus and the 2008 debates. Consistent with previous results, probabilistic models outperform TextTiling. In addition, among the probabilistic models, the models that had access to speaker information consistently segment better than those lacking such information, supporting our assertion that there is benefit to modeling conversation as a social process. Furthermore, NP-SITS outperforms NP-HMM in both experiments, suggesting that using a distribution over topics to turns is better than using a single topic. This is consistent with parametric results reported in (Purver et al., 2006). The contribution of speaker identity seems more valuable in the debate setting. Debates are characterized by strong rewards for setting the agenda; dodging a question or moving the debate toward an opponent’s weakness can be useful strategies (Boydstun et al., 2011). In contrast, meetings (particularly lowstakes ICSI meetings) are characterized by pragmatic rather than strategic topic shifts. Second, agendasetting roles are clearer in formal debates; a moderator is tasked with setting the agenda and ensuring the conversation does not wander too much. The nonparametric model does best on the smaller debate dataset. We suspect that an evaluation that directly accessed the topic quality, either via prediction (Teh et al., 2006) or interpretability (Chang et al., 2009) would favor the nonparametric model more. 6 Evaluating Topic Shift Tendency In this section, we focus on the ability of SITS to capture speaker-level attributes. Recall that SITS associates with each speaker a topic shift tendency π that represents the probability of asserting a new topic in the conversation. While topic segmentation is a well studied problem, there are no established quantitative measurements of an individual’s ability to control a conversation. To evaluate whether the tendency is capturing meaningful characteristics of speakers, we compare our inferred tendencies against insights from political science. 
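The speaker-level analysis that follows relies on posterior estimates of each speaker's topic shift tendency π. The sketch below shows one way such estimates could be computed from retained Gibbs samples of the shift indicators, assuming a symmetric Beta(γ) prior as in the model; the data structures and function name are illustrative, not the authors' implementation, and whether the deterministic first-turn shifts are counted is a modelling choice we leave to the caller.

```python
import numpy as np

def estimate_pi(shift_samples, speakers, M, gamma=1.0):
    """Posterior-mean estimate of each speaker's topic shift tendency pi_m (sketch).

    shift_samples: list of retained Gibbs samples; each is a dict mapping
                   (conversation, turn) -> l in {0, 1}.
    speakers:      dict mapping (conversation, turn) -> speaker id.
    Under a symmetric Beta(gamma) prior, a single sample gives
    E[pi_m] = (S_{m,1} + gamma) / (S_{m,0} + S_{m,1} + 2 * gamma);
    averaging over samples (and over chains) approximates the posterior mean.
    """
    estimates = np.zeros((len(shift_samples), M))
    for s, sample in enumerate(shift_samples):
        ones = np.zeros(M)
        totals = np.zeros(M)
        for key, l in sample.items():
            m = speakers[key]
            ones[m] += l
            totals[m] += 1
        estimates[s] = (ones + gamma) / (totals + 2 * gamma)
    return estimates.mean(axis=0)
```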
2008 Elections To obtain a posterior estimate of π (Figure 3) we create 10 chains with hyperparameters sampled from the uniform distribution U(0, 1) and averaged π over 10 chains (as described in Section 5). In these debates, Ifill is the moderator of the debate between Biden and Palin; Brokaw, Lehrer and Schieffer are the three moderators of three debates between Obama and McCain. Here "Question" denotes questions from audiences in the "town hall" debate. The role of this "speaker" can be considered equivalent to the debate moderator.

Model | EMD | Pk (k=5) | Pk (k=10) | Pk (k=15) | WindowDiff (k=5) | WindowDiff (k=10) | WindowDiff (k=15)
ICSI Dataset
TextTiling | 2.507 | .289 | .388 | .451 | .318 | .477 | .561
P-NoSpeaker-S | 1.949 | .222 | .283 | .342 | .269 | .393 | .485
P-NoSpeaker-M | 1.935 | .207 | .279 | .335 | .253 | .371 | .468
P-SITS | 1.807 | .211 | .251 | .289 | .256 | .363 | .434
NP-HMM | 2.189 | .232 | .257 | .263 | .267 | .377 | .444
NP-SITS | 2.126 | .228 | .253 | .259 | .262 | .372 | .440
Debates Dataset
TextTiling | 2.821 | .433 | .548 | .633 | .534 | .674 | .760
P-NoSpeaker-S | 2.822 | .426 | .543 | .653 | .482 | .650 | .756
P-NoSpeaker-M | 2.712 | .411 | .522 | .589 | .479 | .644 | .745
P-SITS | 2.269 | .380 | .405 | .402 | .482 | .625 | .719
NP-HMM | 2.132 | .362 | .348 | .323 | .486 | .629 | .723
NP-SITS | 1.813 | .332 | .269 | .231 | .470 | .600 | .692
Table 2: Results on the topic segmentation task. Lower is better. The parameter k is the window size of the metrics Pk and WindowDiff chosen to replicate previous results.

[Table 3: Topic shift tendency π of speakers in the 2008 Presidential Election Debates (larger means greater tendency); bar chart over the speakers Ifill, Biden, Palin, Obama, McCain, Brokaw, Lehrer, Schieffer and Question, with values between 0 and 0.4.]

The topic shift tendencies of moderators are much higher than for candidates. In the three debates between Obama and McCain, the moderators—Brokaw, Lehrer and Schieffer—have significantly higher scores than both candidates. This is a useful reality check, since in a debate the moderators are the ones asking questions and literally controlling the topical focus. Interestingly, in the vice-presidential debate, the score of moderator Ifill is only slightly higher than those of Palin and Biden; this is consistent with media commentary characterizing her as a weak moderator.7 Similarly, the "Question" speaker had a relatively high variance, consistent with an amalgamation of many distinct speakers. These topic shift tendencies suggest that all candidates manage to succeed at some points in setting and controlling the debate topics. Our model gives Obama a slightly higher score than McCain, consistent with social science claims (Boydstun et al., 2011) that Obama had the lead in setting the agenda over McCain. Table 4 shows examples of SITS-detected topic shifts.

Crossfire Crossfire, unlike the debates, has many speakers. This allows us to examine more closely what we can learn about speakers' topic shift tendency. We verified that SITS can segment topics; assuming that changing the topic is useful for a speaker, how can we characterize who does so effectively? We examine the relationship between topic shift tendency, social roles, and political ideology. To focus on frequent speakers, we filter out speakers with fewer than 30 turns. Most speakers have relatively small π, with the mode around 0.3. There are, however, speakers with very high topic shift tendencies. Table 5 shows the speakers having the highest values according to SITS. We find that there are three general patterns for who influences the course of a conversation in Crossfire. First, there are structural "speakers" the show uses to frame and propose new topics.
These are 7 http://harpers.org/archive/2008/10/hbc-90003659 audience questions, news clips (e.g. many of Gore’s and Bush’s turns from 2000), and voice overs. That SITS is able to recover these is reassuring. Second, the stable of regular hosts receives high topic shift tendencies, which is reasonable given their experience with the format and ostensible moderation roles (in practice they also stoke lively discussion). The remaining class is more interesting. The remaining non-hosts with high topic shift tendency are relative moderates on the political spectrum: • John Kasich, one of few Republicans to support the assault weapons ban and now governor of Ohio, a swing state • Christine Todd Whitman, former Republican governor of New Jersey, a very Democratic state • John McCain, who before 2008 was known as a “maverick” for working with Democrats (e.g. Russ Feingold) This suggests that, despite Crossfire’s tendency to create highly partisan debates, those who are able to work across the political spectrum may best be able to influence the topic under discussion in highly polarized contexts. Table 4 shows detected topic shifts from these speakers; two of these examples (McCain and Whitman) show disagreement of Republicans with President Bush. In the other, Kasich is defending a Republican plan (school vouchers) popular with traditional Democratic constituencies. 7 Related and Future Work In the realm of statistical models, a number of techniques incorporate social connections and identity to explain content in social networks (Chang and Blei, 84 Previous turn Turn detected as shifting topic Debates Dataset PALIN: Your question to him was whether he supported gay marriage and my answer is the same as his and it is that I do not. IFILL: Wonderful. You agree. On that note, let’s move to foreign policy. You both have sons who are in Iraq or on their way to Iraq. You, Governor Palin, have said that you would like to see a real clear plan for an exit strategy. [.. .] MCCAIN: I think that Joe Biden is qualified in many respects. .. . SCHIEFFER: [.. . ] Let’s talk about energy and climate control. Every president since Nixon has said what both of you [. . .] IFILL: So, Governor, as vice president, there’s nothing that you have promised [. . .] that you wouldn’t take off the table because of this financial crisis we’re in? BIDEN: Again, let me–let’s talk about those tax breaks. [Obama] voted for an energy bill because, for the first time, it had real support for alternative energy. [. . .] on eliminating the tax breaks for the oil companies, Barack Obama voted to eliminate them. [. . .] Crossfire Dataset PRESS: But what do you say, governor, to Governor Bush and [. .. ] your party who would let politicians and not medical scientists decide what drugs are distributed [. .. ] WHITMAN: Well I disagree with them on this particular issues [.. . ] that’s important to me that George Bush stands for education of our children [. .. ] I care about tax policy, I care about the environment. I care about all the issues where he has a proven record in Texas [. .. ] WEXLER: [.. . ] They need a Medicare prescription drug plan [. .. ] Talk about schools, [. .. ] Al Gore has got a real plan. George Bush offers us vouchers. Talk about the environment. [. .. ] Al Gore is right on in terms of the majority of Americans, but George Bush [. .. ] KASICH: [.. . ] I want to talk about choice. [.. . 
] George Bush believes that, if schools fail, parents ought to have a right to get their kids out of those schools and give them a chance and an opportunity for success. Gore says “no way” [.. .] Social Security. George Bush says [. .. ] direct it the way federal employees do [. . .] Al Gore says “No way” [. . .] That’s real choice. That’s real bottom-up, not a bureaucratic approach, the way we run this country. PRESS: Senator, Senator Breaux mentioned that it’s President Bush’s aim to start on education [. .. ] [McCain] [. . .] said he was going to do introduce the legislation the first day of the first week of the new administration. [.. . ] MCCAIN: After one of closest elections in our nation’s history, there is one thing the American people are unanimous about They want their government back. We can do that by ridding politics of large, unregulated contributions that give special interests a seat at the table while average Americans are stuck in the back of the room. Table 4: Example of turns designated as a topic shift by SITS. Turns were chosen with speakers to give examples of those with high topic shift tendency π. Rank Speaker π Rank Speaker π 1 Announcer .884 10 Kasich .570 2 Male .876 11 Carville† .550 3 Question .755 12 Carlson† .550 4 G. W. Bush‡ .751 13 Begala† .545 5 Press† .651 14 Whitman .533 6 Female .650 15 McAuliffe .529 7 Gore‡ .650 16 Matalin† .527 8 Narrator .642 17 McCain .524 9 Novak† .587 18 Fleischer .522 Table 5: Top speakers by topic shift tendencies. We mark hosts (†) and “speakers” who often (but not always) appeared in clips (‡). Apart from those groups, speakers with the highest tendency were political moderates. 2009) and scientific corpora (Rosen-Zvi et al., 2004). However, these models ignore the temporal evolution of content, treating documents as static. Models that do investigate the evolution of topics over time typically ignore the identify of the speaker. For example: models having sticky topics over ngrams (Johnson, 2010), sticky HDP-HMM (Fox et al., 2008); models that are an amalgam of sequential models and topic models (Griffiths et al., 2005; Wallach, 2006; Gruber et al., 2007; Ahmed and Xing, 2008; Boyd-Graber and Blei, 2008; Du et al., 2010); or explicit models of time or other relevant features as a distinct latent variable (Wang and McCallum, 2006; Eisenstein et al., 2010). In contrast, SITS jointly models topic and individuals’ tendency to control a conversation. Not only does SITS outperform other models using standard computational linguistics baselines, but it also proposes intriguing hypotheses for social scientists. Associating each speaker with a scalar that models their tendency to change the topic does improve performance on standard tasks, but it’s inadequate to fully describe an individual. Modeling individuals’ perspective (Paul and Girju, 2010), “side” (Thomas et al., 2006), or personal preferences for topics (Grimmer, 2009) would enrich the model and better illuminate the interaction of influence and topic. Statistical analysis of political discourse can help discover patterns that political scientists, who often work via a “close reading,” might otherwise miss. 
We plan to work with social scientists to validate our implicit hypothesis that our topic shift tendency correlates well with intuitive measures of “influence.” 85 Acknowledgements This research was funded in part by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory. Jordan Boyd-Graber and Philip Resnik are also supported by US National Science Foundation Grant NSF grant #1018625. Any opinions, findings, conclusions, or recommendations expressed are the authors’ and do not necessarily reflect those of the sponsors. References [Abbott et al., 2011] Abbott, R., Walker, M., Anand, P., Fox Tree, J. E., Bowmani, R., and King, J. (2011). How can you say such things?!?: Recognizing disagreement in informal political argument. In Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 2–11. [Ahmed and Xing, 2008] Ahmed, A. and Xing, E. P. (2008). Dynamic non-parametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. In SDM, pages 219– 230. [Beeferman et al., 1999] Beeferman, D., Berger, A., and Lafferty, J. (1999). Statistical models for text segmentation. Mach. Learn., 34:177–210. [Blei and Lafferty, 2009] Blei, D. M. and Lafferty, J. (2009). Text Mining: Theory and Applications, chapter Topic Models. Taylor and Francis, London. [Boyd-Graber and Blei, 2008] Boyd-Graber, J. and Blei, D. M. (2008). Syntactic topic models. In Proceedings of Advances in Neural Information Processing Systems. [Boydstun et al., 2011] Boydstun, A. E., Phillips, C., and Glazier, R. A. (2011). It’s the economy again, stupid: Agenda control in the 2008 presidential debates. Forthcoming. [Chang and Blei, 2009] Chang, J. and Blei, D. M. (2009). Relational topic models for document networks. In Proceedings of Artificial Intelligence and Statistics. [Chang et al., 2009] Chang, J., Boyd-Graber, J., Wang, C., Gerrish, S., and Blei, D. M. (2009). Reading tea leaves: How humans interpret topic models. In Neural Information Processing Systems. [Du et al., 2010] Du, L., Buntine, W., and Jin, H. (2010). Sequential latent dirichlet allocation: Discover underlying topic structures within a document. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 148 –157. [Ehlen et al., 2007] Ehlen, P., Purver, M., and Niekrasz, J. (2007). A meeting browser that learns. In In: Proceedings of the AAAI Spring Symposium on Interaction Challenges for Intelligent Assistants. [Eisenstein and Barzilay, 2008] Eisenstein, J. and Barzilay, R. (2008). Bayesian unsupervised topic segmentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Proceedings of Emperical Methods in Natural Language Processing. [Eisenstein et al., 2010] Eisenstein, J., O’Connor, B., Smith, N. A., and Xing, E. P. (2010). A latent variable model for geographic lexical variation. In EMNLP’10, pages 1277–1287. [Ferguson, 1973] Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230. [Fox et al., 2008] Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. (2008). An hdp-hmm for systems with state persistence. In Proceedings of International Conference of Machine Learning. [Galley et al., 2003] Galley, M., McKeown, K., FoslerLussier, E., and Jing, H. (2003). 
Discourse segmentation of multi-party conversation. In Proceedings of the Association for Computational Linguistics. [Georgescul et al., 2006] Georgescul, M., Clark, A., and Armstrong, S. (2006). Word distributions for thematic segmentation in a support vector machine approach. In Conference on Computational Natural Language Learning. [Gerrish and Blei, 2010] Gerrish, S. and Blei, D. M. (2010). A language-based approach to measuring scholarly impact. In Proceedings of International Conference of Machine Learning. [Griffiths et al., 2005] Griffiths, T. L., Steyvers, M., Blei, D. M., and Tenenbaum, J. B. (2005). Integrating topics and syntax. In Proceedings of Advances in Neural Information Processing Systems. [Grimmer, 2009] Grimmer, J. (2009). A Bayesian Hierarchical Topic Model for Political Texts: Measuring Expressed Agendas in Senate Press Releases. Political Analysis, 18:1–35. [Gruber et al., 2007] Gruber, A., Rosen-Zvi, M., and Weiss, Y. (2007). Hidden topic Markov models. In Artificial Intelligence and Statistics. [Hawes et al., 2009] Hawes, T., Lin, J., and Resnik, P. (2009). Elements of a computational model for multiparty discourse: The turn-taking behavior of Supreme Court justices. Journal of the American Society for Information Science and Technology, 60(8):1607–1615. [Hearst, 1997] Hearst, M. A. (1997). TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33–64. 86 [Hsueh et al., 2006] Hsueh, P.-y., Moore, J. D., and Renals, S. (2006). Automatic segmentation of multiparty dialogue. In Proceedings of the European Chapter of the Association for Computational Linguistics. [Ireland et al., 2011] Ireland, M. E., Slatcher, R. B., Eastwick, P. W., Scissors, L. E., Finkel, E. J., and Pennebaker, J. W. (2011). Language style matching predicts relationship initiation and stability. Psychological Science, 22(1):39–44. [Janin et al., 2003] Janin, A., Baron, D., Edwards, J., Ellis, D., Gelbart, D., Morgan, N., Peskin, B., Pfau, T., Shriberg, E., Stolcke, A., and Wooters, C. (2003). The ICSI meeting corpus. In IEEE International Conference on Acoustics, Speech, and Signal Processing. [Johnson, 2010] Johnson, M. (2010). PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the Association for Computational Linguistics. [Morris and Hirst, 1991] Morris, J. and Hirst, G. (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21–48. [M¨uller and Quintana, 2004] M¨uller, P. and Quintana, F. A. (2004). Nonparametric Bayesian data analysis. Statistical Science, 19(1):95–110. [Murray et al., 2005] Murray, G., Renals, S., and Carletta, J. (2005). Extractive summarization of meeting recordings. In European Conference on Speech Communication and Technology. [Neal, 2000] Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249– 265. [Neal, 2003] Neal, R. M. (2003). Slice sampling. Annals of Statistics, 31:705–767. [Olney and Cai, 2005] Olney, A. and Cai, Z. (2005). An orthonormal basis for topic segmentation in tutorial dialogue. In Proceedings of the Human Language Technology Conference. [Paul and Girju, 2010] Paul, M. and Girju, R. (2010). A two-dimensional topic-aspect model for discovering multi-faceted topics. In Association for the Advancement of Artificial Intelligence. [Pele and Werman, 2009] Pele, O. and Werman, M. 
(2009). Fast and robust earth mover’s distances. In International Conference on Computer Vision. [Pevzner and Hearst, 2002] Pevzner, L. and Hearst, M. A. (2002). A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28. [Purver, 2011] Purver, M. (2011). Topic segmentation. In Tur, G. and de Mori, R., editors, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech, pages 291–317. Wiley. [Purver et al., 2006] Purver, M., K¨ording, K., Griffiths, T. L., and Tenenbaum, J. (2006). Unsupervised topic modelling for multi-party spoken discourse. In Proceedings of the Association for Computational Linguistics. [Resnik and Hardisty, 2010] Resnik, P. and Hardisty, E. (2010). Gibbs sampling for the uninitiated. Technical Report UMIACS-TR-2010-04, University of Maryland. http://www.lib.umd.edu/drum/handle/1903/10058. [Reynar, 1998] Reynar, J. C. (1998). Topic Segmentation: Algorithms and Applications. PhD thesis, University of Pennsylvania. [Rosen-Zvi et al., 2004] Rosen-Zvi, M., Griffiths, T. L., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of Uncertainty in Artificial Intelligence. [Rubner et al., 2000] Rubner, Y., Tomasi, C., and Guibas, L. J. (2000). The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40:99–121. [Teh et al., 2006] Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. [Thomas et al., 2006] Thomas, M., Pang, B., and Lee, L. (2006). Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In Proceedings of Emperical Methods in Natural Language Processing. [Tur et al., 2010] Tur, G., Stolcke, A., Voss, L., Peters, S., Hakkani-T¨ur, D., Dowding, J., Favre, B., Fern´andez, R., Frampton, M., Frandsen, M., Frederickson, C., Graciarena, M., Kintzing, D., Leveque, K., Mason, S., Niekrasz, J., Purver, M., Riedhammer, K., Shriberg, E., Tien, J., Vergyri, D., and Yang, F. (2010). The CALO meeting assistant system. Trans. Audio, Speech and Lang. Proc., 18:1601–1611. [Wallach, 2006] Wallach, H. M. (2006). Topic modeling: Beyond bag-of-words. In Proceedings of International Conference of Machine Learning. [Wallach, 2008] Wallach, H. M. (2008). Structured Topic Models for Language. PhD thesis, University of Cambridge. [Wang et al., 2008] Wang, C., Blei, D. M., and Heckerman, D. (2008). Continuous time dynamic topic models. In Proceedings of Uncertainty in Artificial Intelligence. [Wang and McCallum, 2006] Wang, X. and McCallum, A. (2006). Topics over time: a non-Markov continuoustime model of topical trends. In Knowledge Discovery and Data Mining, Knowledge Discovery and Data Mining. 87
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 854–863, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics Classifying French Verbs Using French and English Lexical Resources Ingrid Falk Universit´e de Lorraine/LORIA, Nancy, France [email protected] Claire Gardent CNRS/LORIA, Nancy, France [email protected] Jean-Charles Lamirel Universit´e de Strasbourg/LORIA, Nancy, France [email protected] Abstract We present a novel approach to the automatic acquisition of a Verbnet like classification of French verbs which involves the use (i) of a neural clustering method which associates clusters with features, (ii) of several supervised and unsupervised evaluation metrics and (iii) of various existing syntactic and semantic lexical resources. We evaluate our approach on an established test set and show that it outperforms previous related work with an Fmeasure of 0.70. 1 Introduction Verb classifications have been shown to be useful both from a theoretical and from a practical perspective. From the theoretical viewpoint, they permit capturing syntactic and/or semantic generalisations about verbs (Levin, 1993; Kipper Schuler, 2006). From a practical perspective, they support factorisation and have been shown to be effective in various NLP (Natural language Processing) tasks such as semantic role labelling (Swier and Stevenson, 2005) or word sense disambiguation (Dang, 2004). While there has been much work on automatically acquiring verb classes for English (Sun et al., 2010) and to a lesser extent for German (Brew and Schulte im Walde, 2002; Schulte im Walde, 2003; Schulte im Walde, 2006), Japanese (Oishi and Matsumoto, 1997) and Italian (Merlo et al., 2002), few studies have been conducted on the automatic classification of French verbs. Recently however, two proposals have been put forward. On the one hand, (Sun et al., 2010) applied a clustering approach developed for English to French. They exploit features extracted from a large scale subcategorisation lexicon (LexSchem (Messiant, 2008)) acquired fully automatically from Le Monde newspaper corpus and show that, as for English, syntactic frames and verb selectional preferences perform better than lexical cooccurence features. Their approach achieves a F-measure of 55.1 on 116 verbs occurring at least 150 times in Lexschem. The best performance is achieved when restricting the approach to verbs occurring at least 4000 times (43 verbs) with an F-measure of 65.4. On the other hand, Falk and Gardent (2011) present a classification approach for French verbs based on the use of Formal Concept Analysis (FCA). FCA (Barbut and Monjardet, 1970) is a symbolic classification technique which permits creating classes associating sets of objects (eg. French verbs) with sets of features (eg. syntactic frames). Falk and Gardent (2011) provide no evaluation for their results however, only a qualitative analysis. In this paper, we describe a novel approach to the clustering of French verbs which (i) gives good results on the established benchmark used in (Sun et al., 2010) and (ii) associates verbs with a feature profile describing their syntactic and semantic properties. The approach exploits a clustering method called IGNGF (Incremental Growing Neural Gas with Feature Maximisation, (Lamirel et al., 2011b)) which uses the features characterising each cluster both to guide the clustering process and to label the output clusters. 
We apply this method to the data contained in various verb lexicons and we evalu854 ate the resulting classification on a slightly modified version of the gold standard provided by (Sun et al., 2010). We show that the approach yields promising results (F-measure of 70%) and that the clustering produced systematically associates verbs with syntactic frames and thematic grids thereby providing an interesting basis for the creation and evaluation of a Verbnet-like classification. Section 2 describes the lexical resources used for feature extraction and Section 3 the experimental setup. Sections 4 and 5 present the data used for and the results obtained. Section 6 concludes. 2 Lexical Resources Used Our aim is to accquire a classification which covers the core verbs of French, could be used to support semantic role labelling and is similar in spirit to the English Verbnet. In this first experiment, we therefore favoured extracting the features used for clustering, not from a large corpus parsed automatically, but from manually validated resources1. These lexical resources are (i) a syntactic lexicon produced by merging three existing lexicons for French and (ii) the English Verbnet. Among the many syntactic lexicons available for French (Nicolas et al., 2008; Messiant, 2008; Kup´s´c and Abeill´e, 2008; van den Eynde and Mertens, 2003; Gross, 1975), we selected and merged three lexicons built or validated manually namely, Dicovalence, TreeLex and the LADL tables. The resulting lexicon contains 5918 verbs, 20433 lexical entries (i.e., verb/frame pairs) and 345 subcategorisation frames. It also contains more detailed syntactic and semantic features such as lexical preferences (e.g., locative argument, concrete object) or thematic role information (e.g., symmetric arguments, asset role) which we make use of for clustering. We use the English Verbnet as a resource for associating French verbs with thematic grids as follows. We translate the verbs in the English Verbnet classes to French using English-French dictionaries2. To 1Of course, the same approach could be applied to corpus based data (as done e.g., in (Sun et al., 2010)) thus making the approach fully unsupervised and directly applicable to any language for which a parser is available. 2For the translation we use the following resources: SciFran-Euradic, a French-English bilingual dictionary, built and improved by linguists (http://catalog.elra.info/ deal with polysemy, we train a supervised classifier as follows. We first map French verbs with English Verbnet classes: A French verb is associated with an English Verbnet class if, according to our dictionaries, it is a translation of an English verb in this class. The task of the classifier is then to produce a probability estimate for the correctness of this association, given the training data. The training set is built by stating for 1740 ⟨French verb, English Verbnet class⟩pairs whether the verb has the thematic grid given by the pair’s Verbnet class3. This set is used to train an SVM (support vector machine) classifier4. The features we use are similar to those used in (Mouton, 2010): they are numeric and are derived for example from the number of translations an English or French verb had, the size of the Verbnet classes, the number of classes a verb is a member of etc. The resulting classifier gives for each ⟨French verb, English VN class⟩pair the estimated probability of the pair’s verb being a member of the pair’s class5. 
We select 6000 pairs with highest probability estimates and obtain the translated classes by assigning each verb in a selected pair to the pair's class. This way French verbs are effectively associated with one or more English Verbnet thematic grids.

2 (continued) product_info.php?products_id=666), Google dictionary (http://www.google.com/dictionary) and Dicovalence (van den Eynde and Mertens, 2003).
3 The training data consists of the verbs and Verbnet classes used in the gold standard presented in (Sun et al., 2010).
4 We used the libsvm (Chang and Lin, 2011) implementation of the classifier for this step.
5 The accuracy of the classifier on the held out random test set of 100 pairs was 90%.

3 Clustering Methods, Evaluation Metrics and Experimental Setup

3.1 Clustering Methods

The IGNGF clustering method is an incremental neural "winner-take-most" clustering method belonging to the family of the free topology neural clustering methods. Like other neural free topology methods such as Neural Gas (NG) (Martinetz and Schulten, 1991), Growing Neural Gas (GNG) (Fritzke, 1995), or Incremental Growing Neural Gas (IGNG) (Prudent and Ennaji, 2005), the IGNGF method makes use of Hebbian learning (Hebb, 1949) for dynamically structuring the learning space. However, contrary to these methods, the use of a standard distance measure for determining a winner is replaced in IGNGF by feature maximisation. Feature maximisation is a cluster quality metric which associates each cluster with maximal features, i.e., features whose Feature F-measure is maximal. Feature F-measure is the harmonic mean of Feature Recall and Feature Precision, which in turn are defined as:

$$FR_c(f) = \frac{\sum_{v \in c} W_v^f}{\sum_{c' \in C} \sum_{v \in c'} W_v^f}, \qquad FP_c(f) = \frac{\sum_{v \in c} W_v^f}{\sum_{f' \in F_c,\, v \in c} W_v^{f'}}$$

where W_x^f represents the weight of the feature f for element x and Fc designates the set of features associated with the verbs occurring in the cluster c. A feature is then said to be maximal for a given cluster iff its Feature F-measure is higher for that cluster than for any other cluster.

The IGNGF method was shown to outperform other usual neural and non neural methods for clustering tasks on relatively clean data (Lamirel et al., 2011b). Since we use features extracted from manually validated sources, this clustering technique seems a good fit for our application. In addition, the feature maximisation and cluster labeling performed by the IGNGF method has proved promising both for visualising clustering results (Lamirel et al., 2008) and for validating or optimising a clustering method (Attik et al., 2006). We make use of these processes in all our experiments and systematically compute cluster labelling and feature maximisation on the output clusterings. As we shall see, this permits distinguishing between clusterings with similar F-measure but lower "linguistic plausibility" (cf. Section 5). This facilitates clustering interpretation in that cluster labeling clearly indicates the association between clusters (verbs) and their prevalent features. And this supports the creation of a Verbnet style classification in that cluster labeling directly provides classes grouping together verbs, thematic grids and subcategorisation frames.
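To make Feature Recall, Feature Precision and the resulting Feature F-measure concrete, here is a small sketch that computes them for every (cluster, feature) pair and then extracts each cluster's maximal features. It is our own reading of the definitions above, not the authors' code; it assumes every verb belongs to exactly one cluster and that feature weights are supplied externally (e.g. the IDF-normalised weights of Section 3.4).

```python
from collections import defaultdict

def feature_f_measures(clusters, weights):
    """Feature Recall, Precision and F-measure per (cluster, feature), a sketch.

    clusters: dict cluster_id -> set of verbs (a partition of the verbs)
    weights:  dict (verb, feature) -> weight W_v^f
    Returns dict cluster_id -> {feature: F-measure}.
    """
    # Total weight of each feature over the whole clustering (denominator of FR).
    total_per_feature = defaultdict(float)
    for (v, f), w in weights.items():
        total_per_feature[f] += w

    ff = defaultdict(dict)
    for c, verbs in clusters.items():
        # Weight mass of every feature inside cluster c.
        in_cluster = defaultdict(float)
        for (v, f), w in weights.items():
            if v in verbs:
                in_cluster[f] += w
        cluster_mass = sum(in_cluster.values())          # denominator of FP
        for f, mass in in_cluster.items():
            fr = mass / total_per_feature[f]
            fp = mass / cluster_mass
            ff[c][f] = 2 * fr * fp / (fr + fp) if fr + fp > 0 else 0.0
    return ff

def maximal_features(ff):
    """Features whose Feature F-measure peaks in a given cluster (sketch)."""
    best = {}
    for c, feats in ff.items():
        for f, value in feats.items():
            if f not in best or value > best[f][1]:
                best[f] = (c, value)
    return {c: [f for f, (c_best, _) in best.items() if c_best == c] for c in ff}
```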
3.2 Evaluation metrics

We use several evaluation metrics which bear on different properties of the clustering.

Modified Purity and Accuracy. Following (Sun et al., 2010), we use modified purity (mPUR), weighted class accuracy (ACC) and F-measure to evaluate the clusterings produced. These are computed as follows. Each induced cluster is assigned the gold class (its prevalent class, prev(C)) to which most of its member verbs belong. A verb is then said to be correct if the gold associates it with the prevalent class of the cluster it is in. Given this, purity is the ratio between the number of correct gold verbs in the clustering and the total number of gold verbs in the clustering:6

$$mPUR = \frac{\sum_{C \in \text{Clustering},\, |prev(C)| > 1} |prev(C) \cap C|}{\text{Verbs}_{\text{Gold} \cap \text{Clustering}}}$$

where Verbs_{Gold∩Clustering} is the total number of gold verbs in the clustering. Accuracy represents the proportion of gold verbs in those clusters which are associated with a gold class, compared to all the gold verbs in the clustering. To compute accuracy we associate to each gold class C_Gold a dominant cluster, i.e. the cluster dom(C_Gold) which has most verbs in common with the gold class. Then accuracy is given by the following formula:

$$ACC = \frac{\sum_{C \in \text{Gold}} |dom(C) \cap C|}{\text{Verbs}_{\text{Gold} \cap \text{Clustering}}}$$

Finally, F-measure is the harmonic mean of mPUR and ACC.

6 Clusters for which the prevalent class has only one element are ignored.

Coverage. To assess the extent to which a clustering matches the gold classification, we additionally compute the coverage of each clustering, that is, the proportion of gold classes that are prevalent classes in the clustering.

Cumulative Micro Precision (CMP). As pointed out in (Lamirel et al., 2008; Attik et al., 2006), unsupervised evaluation metrics based on cluster labelling and feature maximisation can prove very useful for identifying the best clustering strategy. Following (Lamirel et al., 2011a), we use CMP to identify the best clustering. Computed on the clustering results, this metric evaluates the quality of a clustering w.r.t. the cluster features rather than w.r.t. a gold standard. It was shown in (Ghribi et al., 2010) to be effective in detecting degenerated clustering results including a small number of large heterogeneous, "garbage" clusters and a big number of small size "chunk" clusters. First, the local Recall (R_c^f) and the local Precision (P_c^f) of a feature f in a cluster c are defined as follows:

$$R_c^f = \frac{|v_c^f|}{|V^f|}, \qquad P_c^f = \frac{|v_c^f|}{|V_c|}$$

where v_c^f is the set of verbs having feature f in c, V_c the set of verbs in c and V^f the set of verbs with feature f. Cumulative Micro-Precision (CMP) is then defined as follows:

$$CMP = \frac{\sum_{i=|C_{inf}|}^{|C_{sup}|} \frac{1}{|C_{i+}|^2} \sum_{c \in C_{i+},\, f \in F_c} P_c^f}{\sum_{i=|C_{inf}|}^{|C_{sup}|} \frac{1}{|C_{i+}|}}$$

where C_{i+} represents the subset of clusters of C for which the number of associated verbs is greater than i, and:

$$C_{inf} = \operatorname{argmin}_{c_i \in C} |c_i|, \qquad C_{sup} = \operatorname{argmax}_{c_i \in C} |c_i|$$

3.3 Cluster display, feature F-measure and confidence score

To facilitate interpretation, clusters are displayed as illustrated in Table 1. Features are displayed in decreasing order of Feature F-measure (cf. Section 3.1) and features whose Feature F-measure is under the average Feature F-measure of the overall clustering are clearly delineated from others. In addition, for each verb in a cluster, a confidence score is displayed which is the ratio between the sum of the F-measures of its cluster maximised features over the sum of the F-measures of the overall cluster maximised features. Verbs whose confidence score is 0 are considered as orphan data.

3.4 Experimental setup

We applied an IDF-Norm weighting scheme (Robertson and Jones, 1976) to decrease the influence of the most frequent features (IDF component) and to compensate for discrepancies in feature number (normalisation).
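Before turning to the data, here is a small sketch of the supervised metrics of Section 3.2 (mPUR, ACC and their harmonic mean), as we read the formulas above. The function name and data layout are illustrative, and our treatment of the footnote (ignoring clusters whose prevalent class contributes only one verb) is an interpretation rather than the authors' exact implementation.

```python
def mpur_acc_f(clusters, gold):
    """Modified purity, accuracy and F-measure for a clustering, a sketch.

    clusters: dict cluster_id -> set of verbs (induced clustering)
    gold:     dict class_id  -> set of verbs (gold classes)
    Only verbs appearing in both the clustering and the gold are counted.
    """
    gold_verbs = set().union(*gold.values())
    n = sum(len(c & gold_verbs) for c in clusters.values())

    # mPUR: each cluster is credited with its prevalent gold class; clusters
    # whose prevalent class contributes a single verb are ignored (footnote 6).
    correct = 0
    for c in clusters.values():
        overlaps = [len(c & g) for g in gold.values()]
        if max(overlaps, default=0) > 1:
            correct += max(overlaps)
    mpur = correct / n if n else 0.0

    # ACC: each gold class is matched with its dominant cluster.
    dominant = sum(
        max((len(c & g) for c in clusters.values()), default=0)
        for g in gold.values()
    )
    acc = dominant / n if n else 0.0

    f = 2 * mpur * acc / (mpur + acc) if mpur + acc > 0 else 0.0
    return mpur, acc, f
```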
C6- 14(14) [197(197)] ———Prevalent Label — = AgExp-Cause 0.341100 G-AgExp-Cause 0.274864 C-SUJ:Ssub,OBJ:NP 0.061313 C-SUJ:Ssub 0.042544 C-SUJ:NP,DEOBJ:Ssub ********** ********** 0.017787 C-SUJ:NP,DEOBJ:VPinf 0.008108 C-SUJ:VPinf,AOBJ:PP . . . [**d´eprimer 0.934345 4(0)] [affliger 0.879122 3(0)] [´eblouir 0.879122 3(0)] [choquer 0.879122 3(0)] [d´ecevoir 0.879122 3(0)] [d´econtenancer 0.879122 3(0)] [d´econtracter 0.879122 3(0)] [d´esillusionner 0.879122 3(0)] [**ennuyer 0.879122 3(0)] [fasciner 0.879122 3(0)] [**heurter 0.879122 3(0)] . . . Table 1: Sample output for a cluster produced with the grid-scf-sem feature set and the IGNGF clustering method. We use K-Means as a baseline. For each clustering method (K-Means and IGNGF), we let the number of clusters vary between 1 and 30 to obtain a partition that reaches an optimum F-measure and a number of clusters that is in the same order of magnitude as the initial number of Gold classes (i.e. 11 classes). 4 Features and Data Features In the simplest case the features are the subcategorisation frames (scf) associated to the verbs by our lexicon. We also experiment with different combinations of additional, syntactic (synt) and semantic features (sem) extracted from the lexicon and with the thematic grids (grid) extracted from the English Verbnet. The thematic grid information is derived from the English Verbnet as explained in Section 2. The syntactic features extracted from the lexicon are listed in Table 1(a). They indicate whether a verb accepts symmetric arguments (e.g., John met Mary/John and Mary met); has four or more arguments; combines with a predicative phrase (e.g., John named Mary president); takes a sentential complement or an optional object; or accepts the passive in se (similar to the English middle voice Les habits se vendent bien / The clothes sell well). As shown in Table 1(a), these 857 (a) Additional syntactic features. Feature related VN class Symmetric arguments amalgamate-22.2, correspond-36.1 4 or more arguments get-13.5.1, send-11.1 Predicate characterize-29.2 Sentential argument correspond-36.1, characterize-29.2 Optional object implicit theme (Randall, 2010), p. 95 Passive built with se theme role (Randall, 2010), p. 120 (b) Additional semantic features. Feature related VN class Location role put-9.1, remove-10.1, . .. Concrete object hit-18.1 (eg. INSTRUMENT) (non human role) other cos-45.4 .. . Asset role get-13.5.1 Plural role amalgamate-22.2, correspond-36.1 Table 2: Additional syntactic (a) and semantic (b) features extracted from the LADL and Dicovalence resources and the alternations/roles they are possibly related to. features are meant to help identify specific Verbnet classes and thematic roles. Finally, we extract four semantic features from the lexicon. These indicate whether a verb takes a locative or an asset argument and whether it requires a concrete object (non human role) or a plural role. The potential correlation between these features and Verbnet classes is given in Table 1(b). French Gold Standard To evaluate our approach, we use the gold standard proposed by Sun et al. (2010). This resource consists of 16 fine grained Levin classes with 12 verbs each whose predominant sense in English belong to that class. Since our goal is to build a Verbnet like classification for French, we mapped the 16 Levin classes of the Sun et al. (2010)’s Gold Standard to 11 Verbnet classes thereby associating each class with a thematic grid. In addition we group Verbnet semantic roles as shown in Table 4. 
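As an illustration of this grouping, the mapping of Table 4 can be encoded as a simple lookup that collapses a Verbnet thematic grid into role groups. The sketch below is only indicative: how indexed symmetric roles (e.g. Theme1/Theme2 versus a lone Theme) are detected in practice is an assumption, and the example grid is hypothetical.

```python
# Role groups of Table 4 (indexed roles such as Theme1/Theme2 signal the symmetric group)
ROLE_GROUPS = {
    "Agent": "AgExp", "Experiencer": "AgExp",
    "Actor": "AgentSym", "Actor1": "AgentSym", "Actor2": "AgentSym",
    "Theme": "Theme", "Topic": "Theme", "Stimulus": "Theme", "Proposition": "Theme",
    "Theme1": "ThemeSym", "Theme2": "ThemeSym",
    "Predicate": "PredAtt", "Attribute": "PredAtt",
    "Patient": "Patient", "Patient1": "PatientSym", "Patient2": "PatientSym",
    "Material": "Start", "Source": "Start",
    "Product": "End", "Destination": "End", "Recipient": "End",
    "Location": "Location", "Instrument": "Instrument",
    "Cause": "Cause", "Beneficiary": "Beneficiary",
}

def group_grid(grid):
    """Collapse a Verbnet thematic grid into the role groups of Table 4."""
    return tuple(sorted({ROLE_GROUPS.get(role, role) for role in grid}))

# Hypothetical example: group_grid(["Agent", "Theme", "Recipient"]) -> ('AgExp', 'End', 'Theme')
```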
Table 3 shows the reference we use for evaluation. Verbs For our clustering experiments we use the 2183 French verbs occurring in the translations of the 11 classes in the gold standard (cf. Section 4). Since we ignore verbs with only one feature the number of verbs and ⟨verb, feature⟩pairs considered may vary slightly across experiments. AgExp Agent, Experiencer AgentSym Actor, Actor1, Actor2 Theme Theme, Topic, Stimulus, Proposition PredAtt Predicate, Attribute ThemeSym Theme, Theme1, Theme2 Patient Patient PatientSym Patient, Patient1, Patient2 Start Material (transformation), Source (motion, transfer) End Product (transformation), Destination (motion), Recipient (transfer) Location Instrument Cause Beneficiary Table 4: Verbnet role groups. 5 Results 5.1 Quantitative Analysis Table 4(a) includes the evaluation results for all the feature sets when using IGNGF clustering. In terms of F-measure, the results range from 0.61 to 0.70. This generally outperforms (Sun et al., 2010) whose best F-measures vary between 0.55 for verbs occurring at least 150 times in the training data and 0.65 for verbs occurring at least 4000 times in this training data. The results are not directly comparable however since the gold data is slightly different due to the grouping of Verbnet classes through their thematic grids. In terms of features, the best results are obtained using the grid-scf-sem feature set with an Fmeasure of 0.70. Moreover, for this data set, the unsupervised evaluation metrics (cf. Section 3) highlight strong cluster cohesion with a number of clusters close to the number of gold classes (13 clusters for 11 gold classes); a low number of orphan verbs (i.e., verbs whose confidence score is zero); and a high Cumulated Micro Precision (CMP = 0.3) indicating homogeneous clusters in terms of maximising features. The coverage of 0.72 indicates that approximately 8 out of the 11 gold classes could be matched to a prevalent label. That is, 8 clusters were labelled with a prevalent label corresponding to 8 distinct gold classes. 
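The modified purity, accuracy and coverage figures reported in this section can be reproduced from a clustering and the gold classes roughly as follows. This is a schematic sketch rather than the evaluation code actually used; in particular, the filtering of verbs absent from the gold standard is an assumption.

```python
def evaluate(clusters, gold):
    """clusters: {cluster_id: set of verbs};  gold: {gold_class: set of verbs}.
       Returns modified purity (mPUR), accuracy (ACC), their harmonic mean and coverage."""
    gold_verbs = set.union(*gold.values())
    n = sum(len(c & gold_verbs) for c in clusters.values())   # gold verbs in the clustering

    prevalent = {}      # prevalent gold class of each cluster
    correct = 0
    for cid, cluster in clusters.items():
        overlaps = {g: len(cluster & verbs) for g, verbs in gold.items() if cluster & verbs}
        if not overlaps:
            continue
        best = max(overlaps, key=overlaps.get)
        prevalent[cid] = best
        if overlaps[best] > 1:          # clusters whose prevalent class is a singleton are ignored
            correct += overlaps[best]
    mpur = correct / n

    # accuracy: each gold class contributes its overlap with its dominant cluster
    acc = sum(max(len(cluster & verbs) for cluster in clusters.values())
              for verbs in gold.values()) / n

    f = 2 * mpur * acc / (mpur + acc)
    coverage = len(set(prevalent.values())) / len(gold)       # gold classes that are prevalent somewhere
    return mpur, acc, f, coverage
```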
In contrast, the classification obtained using the scf-synt-sem feature set has a higher CMP for the clustering with optimal mPUR (0.57); but a lower F-measure (0.61), a larger number of classes (16) 858 AgExp, PatientSym amalgamate-22.2: incorporer, associer, r´eunir, m´elanger, mˆeler, unir, assembler, combiner, lier, fusionner Cause, AgExp amuse-31.1: abattre, accabler, briser, d´eprimer, consterner, an´eantir, ´epuiser, ext´enuer, ´ecraser, ennuyer, ´ereinter, inonder AgExp, PredAtt, Theme characterize-29.2: appr´ehender, concevoir, consid´erer, d´ecrire, d´efinir, d´epeindre, d´esigner, envisager, identifier, montrer, percevoir, repr´esenter, ressentir AgentSym, Theme correspond-36.1: coop´erer, participer, collaborer, concourir, contribuer, associer AgExp, Beneficiary, Extent, Start, Theme get-13.5.1: acheter, prendre, saisir, r´eserver, conserver, garder, pr´eserver, maintenir, retenir, louer, affr´eter AgExp, Instrument, Patient hit-18.1: cogner, heurter, battre, frapper, fouetter, taper, rosser, brutaliser, ´ereinter, maltraiter, corriger other cos-45.4: m´elanger, fusionner, consolider, renforcer, fortifier, adoucir, polir, att´enuer, temp´erer, p´etrir, fac¸onner, former AgExp, Location, Theme light emission-43.1 briller, ´etinceler, flamboyer, luire, resplendir, p´etiller, rutiler, rayonner, scintiller modes of being with motion-47.3: trembler, fr´emir, osciller, vaciller, vibrer, tressaillir, frissonner, palpiter, gr´esiller, trembloter, palpiter run-51.3.2: voyager, aller, errer, circuler, courir, bouger, naviguer, passer, promener, d´eplacer AgExp, End, Theme manner speaking-37.3: rˆaler, gronder, crier, ronchonner, grogner, bougonner, maugr´eer, rousp´eter, grommeler, larmoyer, g´emir, geindre, hurler, gueuler, brailler, chuchoter put-9.1: accrocher, d´eposer, mettre, placer, r´epartir, r´eint´egrer, empiler, emporter, enfermer, ins´erer, installer say-37.7: dire, r´ev´eler, d´eclarer, signaler, indiquer, montrer, annoncer, r´epondre, affirmer, certifier, r´epliquer AgExp, Theme peer-30.3: regarder, ´ecouter, examiner, consid´erer, voir, scruter, d´evisager AgExp, Start, Theme remove-10.1: ˆoter, enlever, retirer, supprimer, retrancher, d´ebarasser, soustraire, d´ecompter, ´eliminer AgExp, End, Start, Theme send-11.1: envoyer, lancer, transmettre, adresser, porter, exp´edier, transporter, jeter, renvoyer, livrer Table 3: French gold classes and their member verbs presented in (Sun et al., 2010). and a higher number of orphans (156). That is, this clustering has many clusters with strong feature cohesion but a class structure that markedly differs from the gold. Since there might be differences in structure between the English Verbnet and the thematic classification for French we are building, this is not necessarily incorrect however. Further investigation on a larger data set would be required to assess which clustering is in fact better given the data used and the classification searched for. In general, data sets whose description includes semantic features (sem or grid) tend to produce better results than those that do not (scf or synt). This is in line with results from (Sun et al., 2010) which shows that semantic features help verb classification. It differs from it however in that the semantic features used by Sun et al. (2010) are selectional preferences while ours are thematic grids and a restricted set of manually encoded selectional preferences. 
Noticeably, the synt feature degrades performance throughout: grid,scf,synt has lower Fmeasure than grid,scf; scf,synt,sem than scf,sem; and scf,synt than scf. We have no clear explanation for this. The best results are obtained with IGNGF method on most of the data sets. Table 4(b) illustrates the differences between the results obtained with IGNGF and those obtained with K-means on the grid-scf-sem data set (best data set). Although Kmeans and IGNGF optimal model reach similar Fmeasure and display a similar number of clusters, the very low CMP (0.10) of the K-means model shows that, despite a good Gold class coverage (0.81), K-means tend to produce more heterogeneous clusters in terms of features. Table 4(b) also shows the impact of IDF feature weighting and feature vector normalisation on clustering. The benefit of preprocessing the data appears clearly. When neither IDF weighting nor vector normalisation are used, F-measure decreases from 0.70 to 0.68 and cumulative micro-precision from 0.30 to 0.21. When either normalisation or IDF weighting is left out, the cumulative micro-precision drops by up to 15 points (from 0.30 to 0.15 and 0.18) and the number of orphans increases from 67 up to 180. 859 (a) The impact of the feature set. Feat. set Nbr. feat. Nbr. verbs mPUR ACC F (Gold) Nbr. classes Cov. Nbr. orphans CMP at opt (13cl.) scf 220 2085 0.93 0.48 0.64 17 0.55 129 0.28 (0.27) grid, scf 231 2085 0.94 0.54 0.68 14 0.64 183 0.12 (0.12) grid, scf, sem 237 2183 0.86 0.59 0.70 13 0.72 67 0.30 (0.30) grid, scf, synt 236 2150 0.87 0.50 0.63 14 0.72 66 0.13 (0.14) grid, scf, synt, sem 242 2201 0.99 0.52 0.69 16 0.82 100 0.50 (0.22) scf, sem 226 2183 0.83 0.55 0.66 23 0.64 146 0.40 (0.26) scf, synt 225 2150 0.91 0.45 0.61 15 0.45 83 0.17 (0.22) scf, synt, sem 231 2101 0.89 0.47 0.61 16 0.64 156 0.57 (0.11) (b) Metrics for best performing clustering method (IGNGF) compared to K-means. Feature set is grid, scf, sem. Method mPUR ACC F (Gold) Nbr. classes Cov. Nbr. orphans CMP at opt (13cl.) IGNGF with IDF and norm. 0.86 0.59 0.70 13 0.72 67 0.30 (0.30) K-means with IDF and norm. 0.88 0.57 0.70 13 0.81 67 0.10 (0.10) IGNGF, no IDF 0.86 0.59 0.70 17 0.81 126 0.18 (0.14) IGNGF, no norm. 0.78 0.62 0.70 18 0.72 180 0.15 (0.11) IGNGF, no IDF, no norm. 0.87 0.55 0.68 14 0.81 103 0.21 (0.21) Table 5: Results. Cumulative micro precision (CMP) is given for the clustering at the mPUR optimum and in parantheses for 13 classes clustering. That is, clusters are less coherent in terms of features. 5.2 Qualitative Analysis We carried out a manual analysis of the clusters examining both the semantic coherence of each cluster (do the verbs in that cluster share a semantic component?) and the association between the thematic grids, the verbs and the syntactic frames provided by clustering. Semantic homogeneity: To assess semantic homogeneity, we examined each cluster and sought to identify one or more Verbnet labels characterising the verbs contained in that cluster. From the 13 clusters produced by clustering, 11 clusters could be labelled. Table 6 shows these eleven clusters, the associated labels (abbreviated Verbnet class names), some example verbs, a sample subcategorisation frame drawn from the cluster maximising features and an illustrating sentence. As can be seen, some clusters group together several subclasses and conversely, some Verbnet classes are spread over several clusters. This is not necessarily incorrect though. 
To start with, recall that we are aiming for a classification which groups together verbs with the same thematic grid. Given this, cluster C2 correctly groups together two Verbnet classes (other cos-45.4 and hit-18.1) which share the same thematic grid (cf. Table 3). In addition, the features associated with this cluster indicate that verbs in these two classes are transitive, select a concrete object, and can be pronominalised which again is correct for most verbs in that cluster. Similarly, cluster C11 groups together verbs from two Verbnet classes with identical theta grid (light emission-43.1 and modes of being with motion-47.3) while its associated features correctly indicate that verbs from both classes accept both the intransitive form without object (la jeune fille rayonne / the young girl glows, un cheval galope / a horse gallops) and with a prepositional object (la jeune fille rayonne de bonheur / the young girl glows with happiness, un cheval galope vers l’infini / a horse gallops to infinity). The third cluster grouping together verbs from two Verbnet classes is C7 which contains mainly judgement verbs (to applaud, bless, compliment, punish) but also some verbs from the (very large) other cos-45.4 class. In this case, a prevalent shared feature is that both types of verbs accept a de-object that is, a prepositional object introduced by ”de” (Jean applaudit Marie d’avoir dans´e / Jean applaudit Marie for having danced; Jean d´egage le sable de la route / Jean clears the sand of the road). The semantic features necessary to provide a finer grained analysis of their differences are lacking. Interestingly, clustering also highlights classes which are semantically homogeneous but syntactically distinct. While clusters C6 and C10 both 860 contain mostly verbs from the amuse-31.1 class (amuser,agacer,´enerver,d´eprimer), their features indicate that verbs in C10 accept the pronominal form (e.g., Jean s’amuse) while verbs in C6 do not (e.g., *Jean se d´eprime). In this case, clustering highlights a syntactic distinction which is present in French but not in English. In contrast, the dispersion of verbs from the other cos-45.4 class over clusters C2 and C7 has no obvious explanation. One reason might be that this class is rather large (361 verbs) and thus might contain French verbs that do not necessarily share properties with the original Verbnet class. Syntax and Semantics. We examined whether the prevalent syntactic features labelling each cluster were compatible with the verbs and with the semantic class(es) manually assigned to the clusters. Table 6 sketches the relation between cluster, syntactic frames and Verbnet like classes. It shows for instance that the prevalent frame of the C0 class (manner speaking-37.3) correctly indicates that verbs in that cluster subcategorise for a sentential argument and an AOBJ (prepositional object in “`a”) (e.g., Jean bafouille `a Marie qu’il est amoureux / Jean stammers to Mary that he is in love); and that verbs in the C9 class (characterize-29.2) subcategorise for an object NP and an attribute (Jean nomme Marie pr´esidente / Jean appoints Marie president). In general, we found that the prevalent frames associated with each cluster adequately characterise the syntax of that verb class. 6 Conclusion We presented an approach to the automatic classification of french verbs which showed good results on an established testset and associates verb clusters with syntactic and semantic features. 
Whether the features associated by the IGNGF clustering with the verb clusters appropriately caracterise these clusters remains an open question. We carried out a first evaluation using these features to label the syntactic arguments of verbs in a corpus with thematic roles and found that precision is high but recall low mainly because of polysemy: the frames and grids made available by the classification for a given verb are correct for that verb but not for the verb sense occurring in the corpus. This suggests that overlapping clustering techniques need to C0 speaking: babiller, bafouiller, balbutier SUJ:NP,OBJ:Ssub,AOBJ:PP Jean bafouille `a Marie qu’il l’aime / Jean stammers to Mary that he is in love C1 put: entasser, r´epandre, essaimer SUJ:NP,POBJ:PP,DUMMY:REFL Loc, Plural Les d´echets s’entassent dans la cour / Waste piles in the yard C2 hit: broyer, d´emolir, fouetter SUJ:NP,OBJ:NP T-Nhum Ces pierres broient les graines / These stones grind the seeds. other cos: agrandir, all´eger, amincir SUJ:NP,DUMMY:REFL les a´eroports s’agrandissent sans arrˆet / airports grow constantly C4 dedicate: s’engager `a, s’obliger `a, SUJ:NP,AOBJ:VPinf,DUMMY:REFL Cette promesse t’engage `a nous suivre / This promise commits you to following us C5 conjecture: penser, attester, agr´eer SUJ:NP,OBJ:Ssub Le m´edecin atteste que l’employ´e n’est pas en ´etat de travailler / The physician certifies that the employee is not able to work C6 amuse: d´eprimer, d´econtenancer, d´ecevoir SUJ:Ssub,OBJ:NP SUJ:NP,DEOBJ:Ssub Travailler d´eprime Marie / Working depresses Marie Marie d´eprime de ce que Jean parte / Marie depresses because of Jean’s leaving C7 other cos: d´egager, vider, drainer, sevrer judgement SUJ:NP,OBJ:NP,DEOBJ:PP vider le r´ecipient de son contenu / empty the container of its contents applaudir, b´enir, blˆamer, SUJ:NP,OBJ:NP,DEOBJ:Ssub Jean blame Marie d’avoir couru / Jean blames Mary for runnig C9 characterise: promouvoir, adouber, nommer SUJ:NP,OBJ:NP,ATB:XP Jean nomme Marie pr´esidente / Jean appoints Marie president C10 amuse: agacer, amuser, enorgueillir SUJ:NP,DEOBJ:XP,DUMMY:REFL Jean s’enorgueillit d’ˆetre roi/ Jean is proud to be king C11 light: rayonner,clignoter,cliqueter SUJ:NP,POBJ:PP Jean clignote des yeux / Jean twinkles his eyes motion: aller, passer, fuir, glisser SUJ:NP,POBJ:PP glisser sur le trottoir verglac´e / slip on the icy sidewalk C12 transfer msg: enseigner, permettre, interdire SUJ:NP,OBJ:NP,AOBJ:PP Jean enseigne l’anglais `a Marie / Jean teaches Marie English. Table 6: Relations between clusters, syntactic frames and Verbnet like classes. be applied. We are also investigating how the approach scales up to the full set of verbs present in the lexicon. Both Dicovalence and the LADL tables contain rich detailed information about the syntactic and semantic properties of French verbs. We intend to tap on that potential and explore how well the various semantic features that can be extracted from these resources support automatic verb classification for the full set of verbs present in our lexicon. 861 References M. Attik, S. Al Shehabi, and J.-C. Lamirel. 2006. Clustering Quality Measures for Data Samples with Multiple Labels. In Databases and Applications, pages 58–65. M. Barbut and B. Monjardet. 1970. Ordre et Classification. Hachette Universit´e. C. Brew and S. Schulte im Walde. 2002. Spectral Clustering for German Verbs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 117–124, Philadelphia, PA. C. Chang and C. Lin. 2011. 
LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27. Software available at http://www.csie.ntu.edu. tw/˜cjlin/libsvm. H. T. Dang. 2004. Investigations into the role of lexical semantics in word sense disambiguation. Ph.D. thesis, U. Pennsylvannia, US. I. Falk and C. Gardent. 2011. Combining Formal Concept Analysis and Translation to Assign Frames and Thematic Role Sets to French Verbs. In Amedeo Napoli and Vilem Vychodil, editors, Concept Lattices and Their Applications, Nancy, France, October. B. Fritzke. 1995. A growing neural gas network learns topologies. Advances in Neural Information Processing Systems 7, 7:625–632. M. Ghribi, P. Cuxac, J.-C. Lamirel, and A. Lelu. 2010. Mesures de qualit´e de clustering de documents : prise en compte de la distribution des mots cl´es. In Nicolas B´echet, editor, ´Evaluation des m´ethodes d’Extraction de Connaissances dans les Donn´ees- EvalECD’2010, pages 15–28, Hammamet, Tunisie, January. Fatiha Sa¨ıs. M. Gross. 1975. M´ethodes en syntaxe. Hermann, Paris. D. O. Hebb. 1949. The organization of behavior: a neuropsychological theory. John Wiley & Sons, New York. K. Kipper Schuler. 2006. VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania. A. Kup´s´c and A. Abeill´e. 2008. Growing treelex. In Alexander Gelbkuh, editor, Computational Linguistics and Intelligent Text Processing, volume 4919 of Lecture Notes in Computer Science, pages 28–39. Springer Berlin / Heidelberg. J.-C. Lamirel, A. Phuong Ta, and M. Attik. 2008. Novel Labeling Strategies for Hierarchical Representation of Multidimensional Data Analysis Results. In AIA IASTED, Innbruck, Autriche. J. C. Lamirel, P. Cuxac, and R. Mall. 2011a. A new efficient and unbiased approach for clustering quality evaluation. In QIMIE’11, PaKDD, Shenzen, China. J.-C. Lamirel, R. Mall, P. Cuxac, and G. Safi. 2011b. Variations to incremental growing neural gas algorithm based on label maximization. In Neural Networks (IJCNN), The 2011 International Joint Conference on, pages 956 –965. B. Levin. 1993. English Verb Classes and Alternations: a preliminary investigation. University of Chicago Press, Chicago and London. T. Martinetz and K. Schulten. 1991. A ”Neural-Gas” Network Learns Topologies. Artificial Neural Networks, I:397–402. P. Merlo, S. Stevenson, V. Tsang, and G. Allaria. 2002. A multilingual paradigm for automatic verb classification. In ACL, pages 207–214. C. Messiant. 2008. A subcategorization acquisition system for French verbs. In Proceedings of the ACL08: HLT Student Research Workshop, pages 55–60, Columbus, Ohio, June. Association for Computational Linguistics. C. Mouton. 2010. Ressources et m´ethodes semisupervis´ees pour l’analyse s´emantique de textes en fran cais. Ph.D. thesis, Universit´e Paris 11 - Paris Sud UFR d’informatique. L. Nicolas, B. Sagot, ´E. de La Clergerie, and J. Farr´e. 2008. Computer aided correction and extension of a syntactic wide-coverage lexicon. In Proc. of CoLing 2008, Manchester, UK, August. A. Oishi and Y. Matsumoto. 1997. Detecting the organization of semantic subclasses of Japanese verbs. International Journal of Corpus Linguistics, 2(1):65–89, october. Y. Prudent and A. Ennaji. 2005. An incremental growing neural gas learns topologies. In Neural Networks, 2005. IJCNN ’05. Proceedings. 2005 IEEE International Joint Conference on, volume 2, pages 1211– 1216. J. H. Randall. 2010. Linking. Studies in Natural Language and Linguistic Theory. Springer, Dordrecht. S. E. 
Robertson and K. S. Jones. 1976. Relevance weighting of search terms. Journal of the American Society for Information Science, 27(3):129–146. S. Schulte im Walde. 2003. Experiments on the Automatic Induction of German Semantic Verb Classes. Ph.D. thesis, Institut f¨ur Maschinelle Sprachverarbeitung, Universit¨at Stuttgart. Published as AIMS Report 9(2). S. Schulte im Walde. 2006. Experiments on the automatic induction of german semantic verb classes. Computational Linguistics, 32(2):159–194. L. Sun, A. Korhonen, T. Poibeau, and C. Messiant. 2010. Investigating the cross-linguistic potential of verbnet: style classification. In Proceedings of the 23rd International Conference on Computational Linguistics, 862 COLING ’10, pages 1056–1064, Stroudsburg, PA, USA. Association for Computational Linguistics. R. S. Swier and S. Stevenson. 2005. Exploiting a verb lexicon in automatic semantic role labelling. In HLT/EMNLP. The Association for Computational Linguistics. K. van den Eynde and P. Mertens. 2003. La valence : l’approche pronominale et son application au lexique verbal. Journal of French Language Studies, 13:63– 104. 863
Modeling Sentences in the Latent Space Weiwei Guo Department of Computer Science, Columbia University, [email protected] Mona Diab Center for Computational Learning Systems, Columbia University, [email protected] Abstract Sentence Similarity is the process of computing a similarity score between two sentences. Previous sentence similarity work finds that latent semantics approaches to the problem do not perform well due to insufficient information in single sentences. In this paper, we show that by carefully handling words that are not in the sentences (missing words), we can train a reliable latent variable model on sentences. In the process, we propose a new evaluation framework for sentence similarity: Concept Definition Retrieval. The new framework allows for large scale tuning and testing of Sentence Similarity models. Experiments on the new task and previous data sets show significant improvement of our model over baselines and other traditional latent variable models. Our results indicate comparable and even better performance than current state of the art systems addressing the problem of sentence similarity. 1 Introduction Identifying the degree of semantic similarity [SS] between two sentences is at the core of many NLP applications that focus on sentence level semantics such as Machine Translation (Kauchak and Barzilay, 2006), Summarization (Zhou et al., 2006), Text Coherence Detection (Lapata and Barzilay, 2005), etc.To date, almost all Sentence Similarity [SS] approaches work in the high-dimensional word space and rely mainly on word similarity. There are two main (not unrelated) disadvantages to word similarity based approaches: 1. lexical ambiguity as the pairwise word similarity ignores the semantic interaction between the word and its sentential context; 2. word co-occurrence information is not sufficiently exploited. Latent variable models, such as Latent Semantic Analysis [LSA] (Landauer et al., 1998), Probabilistic Latent Semantic Analysis [PLSA] (Hofmann, 1999), Latent Dirichlet Allocation [LDA] (Blei et al., 2003) can solve the two issues naturally by modeling the semantics of words and sentences simultaneously in the low-dimensional latent space. However, attempts at addressing SS using LSA perform significantly below high dimensional word similarity based models (Mihalcea et al., 2006; O’Shea et al., 2008). We believe that the latent semantics approaches applied to date to the SS problem have not yielded positive results due to the deficient modeling of the sparsity in the semantic space. SS operates in a very limited contextual setting where the sentences are typically very short to derive robust latent semantics. Apart from the SS setting, robust modeling of the latent semantics of short sentences/texts is becoming a pressing need due to the pervasive presence of more bursty data sets such as Twitter feeds and SMS where short contexts are an inherent characteristic of the data. In this paper, we propose to model the missing words (words that are not in the sentence), a feature that is typically overlooked in the text modeling literature, to address the sparseness issue for the SS task. We define the missing words of a sentence as the whole vocabulary in a corpus minus the observed words in the sentence. Our intuition is since observed words in a sentence are too few to tell us what the sentence is about, missing words can be used to tell us what the sentence is not about. 
We assume that the semantic space of both the observed and missing words make up the complete semantics profile of a sentence. After analyzing the way traditional latent variable models (LSA, PLSA/LDA) handle missing words, we decide to model sentences using a weighted matrix factorization approach (Srebro and Jaakkola, 2003), which allows us to treat observed words and missing words differently. We handle missing words using a weighting scheme that distinguishes missing words from observed words yielding robust latent vectors for sentences. Since we use a feature that is already implied by the text itself, our approach is very general (similar to LSA/LDA) in that it can be applied to any format of short texts. In contrast, existing work on modeling short texts focuses on exploiting additional data, e.g., Ramage et al. (2010) model tweets using their metadata (author, hashtag, etc.). Moreover in this paper, we introduce a new evaluation framework for SS: Concept Definition Retrieval (CDR). Compared to existing data sets, the CDR data set allows for large scale tuning and testing of SS modules without further human annotation. 2 Limitations of Topic Models and LSA for Modeling Sentences Usually latent variable models aim to find a latent semantic profile for a sentence that is most relevant to the observed words. By explicitly modeling missing words, we set another criterion to the latent semantics profile: it should not be related to the missing words in the sentence. However, missing words are not as informative as observed words, hence the need for a model that does a good job of modeling missing words at the right level of emphasis/impact is central to completing the semantic picture for a sentence. LSA and PLSA/LDA work on a word-sentence co-occurrence matrix. Given a corpus, the row entries of the matrix are the unique M words in the corpus, and the N columns are the sentence ids. The yielded M × N co-occurrence matrix X comprises the TF-IDF values in each Xij cell, namely that TFIDF value of word wi in sentence sj. For ease of exposition, we will illustrate the problem using a special case of the SS framework where the sentences are concept definitions in a dictionary such as WordNet (Fellbaum, 1998) (WN). Therefore, the sentence corresponding to the concept definition of bank#n#1 is a sparse vector in X containing the following observed words where Xij ̸= 0: the 0.1, financial 5.5, institution 4, that 0.2, accept 2.1, deposit 3, and 0.1, channel 6, the 0.1, money 5, into 0.3, lend 3.5, activity 3 All the other words (girl, car,..., check, loan, business,...) in matrix X that do not occur in the concept definition are considered missing words for the concept entry bank#n#1, thereby their Xij = 0 . Topic models (PLSA/LDA) do not explicitly model missing words. PLSA assumes each document has a distribution over K topics P(zk|dj), and each topic has a distribution over all vocabularies P(wi|zk). Therefore, PLSA finds a topic distribution for each concept definition that maximizes the log likelihood of the corpus X (LDA has a similar form): X i X j Xij log X k P(zk|dj)P(wi|zk) (1) In this formulation, missing words do not contribute to the estimation of sentence semantics, i.e., excluding missing words (Xij = 0) in equation 1 does not make a difference. 
However, empirical results show that given a small number of observed words, usually topic models can only find one topic (most evident topic) for a sentence, e.g., the concept definitions of bank#n#1 and stock#n#1 are assigned the financial topic only without any further discernability. This results in many sentences are assigned exactly the same semantics profile as long as they are pertaining/mentioned within the same domain/topic. The reason is topic models try to learn a 100dimension latent vector (assume dimension K = 100) from very few data points (10 observed words on average). It would be desirable if topic models can exploit missing words (a lot more data than observed words) to render more nuanced latent semantics, so that pairs of sentences in the same domain can be differentiable. On the other hand, LSA explicitly models missing words but not at the right level of emphasis. LSA finds another matrix ˆX (latent vectors) with rank K to approximate X using Singular Vector Decomposition (X ≈ˆX = UKΣKV ⊤ K ), such that the Frobefinancial sport institution Ro Rm Ro −Rm Ro −0.01Rm v1 1 0 0 20 600 -580 14 v2 0.2 0.3 0.2 5 100 -95 4 v3 0.6 0 0.1 18 300 -282 15 Table 1: Three possible latent vectors hypotheses for the definition of bank#n#1 nius norm of difference between the two matrices is minimized: v u u t X i X j  ˆ Xij −Xij 2 (2) In effect, LSA allows missing and observed words to equally impact the objective function. Given the inherent short length of the sentences, LSA (equation 2) allows for much more potential influence from missing words rather than observed words (99.9% cells are 0 in X). Hence the contribution of the observed words is significantly diminished. Moreover, the true semantics of the concept definitions is actually related to some missing words, but such true semantics will not be favored by the objective function, since equation 2 allows for too strong an impact by ˆXij = 0 for any missing word. Therefore the LSA model, in the context of short texts, is allowing missing words to have a significant “uncontrolled” impact on the model. 2.1 An Example The three latent semantics profiles in table 1 illustrate our analysis for topic models and LSA. Assume there are three dimensions: financial, sports, institution. We use Rv o to denote the sum of relatedness between latent vector v and all observed words; similarly, Rv m is the sum of relatedness between the vector v and all missing words. The first latent vector (generated by topic models) is chosen by maximizing Robs = 600. It suggests bank#n#1 is only related to the financial dimension. The second latent vector (found by LSA) has the maximum value of Robs −Rmiss = −95, but obviously the latent vector is not related to bank#n#1 at all. This is because LSA treats observed words and missing words equally the same, and due to the large number of missing words, the information of observed words is lost: Robs −Rmiss ≈−Rmiss. The third vector is the ideal semantics profile, since it is also related to the institution dimension. It has a slightly smaller Robs in comparison to the first vector, yet it has a substantially smaller Rmiss. In order to favor the ideal vector over other vectors, we simply need to adjust the objective function by assigning a smaller weight to Rmiss such as: Robs −0.01×Rmiss. Accordingly, we use weighted matrix factorization (Srebro and Jaakkola, 2003) to model missing words. 
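The role of the weight on missing words can be checked directly against the figures of Table 1: the short script below scores the three candidate vectors under the three criteria discussed above. The relatedness sums are simply the ones listed in the table, not computed from data.

```python
# (R_obs, R_miss) for the three candidate latent vectors of Table 1
candidates = {"v1": (20, 600), "v2": (5, 100), "v3": (18, 300)}

def best(weight_missing):
    """Vector maximising R_obs - weight_missing * R_miss."""
    return max(candidates, key=lambda v: candidates[v][0] - weight_missing * candidates[v][1])

print(best(0.0))    # 'v1' -- observed words only, as in topic models
print(best(1.0))    # 'v2' -- missing words weighted like observed ones, as in LSA
print(best(0.01))   # 'v3' -- small weight on missing words: the desired profile
```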
3 The Proposed Approach 3.1 Weighted Matrix Factorization The weighted matrix factorization [WMF] approach is very similar to SVD, except that it allows for direct control on each matrix cell Xij. The model factorizes the original matrix X into two matrices such that X ≈P ⊤Q, where P is a K × M matrix, and Q is a K × N matrix (figure 1). The model parameters (vectors in P and Q) are optimized by minimizing the objective function: X i X j Wij (P·,i · Q·,j −Xij)2 + λ||P||2 2 + λ||Q||2 2 (3) where λ is a free regularization factor, and the weight matrix W defines a weight for each cell in X. Accordingly, P·,i is a K-dimension latent semantics vector profile for word wi; similarly, Q·,j is the K-dimension vector profile that represents the sentence sj. Operations on these K-dimensional vectors have very intuitive semantic meanings: (1) the inner product of P·,i and Q·,j is used to approximate semantic relatedness of word wi and sentence sj: P·,i · Q·,j ≈Xij, as the shaded parts in Figure 1; (2) equation 3 explicitly requires a sentence should not be related to its missing words by forcing P·,i · Q·,j = 0 for missing words Xij = 0. (3) we can compute the similarity of two sentences sj and sj′ using the cosine similarity between Q·,j, Q·,j′. The latent vectors in P and Q are first randomly initialized, then can be computed iteratively by the following equations (derivation is omitted due to limited space, which can be found in (Srebro and Jaakkola, 2003)): P·,i =  Q ˜ W (i)Q⊤+ λI −1 Q ˜ W (i)X⊤ i,· Q·,j =  P ˜ W (j)P ⊤+ λI −1 P ˜ W (i)X·,j (4) Figure 1: Matrix Factorization where ˜W (i) = diag(W·,i) is an M × M diagonal matrix containing ith row of weight matrix W. Similarly, ˜W (j) = diag(W·,j) is an N × N diagonal matrix containing jth column of W. 3.2 Modeling Missing Words It is straightforward to implement the idea in Section 2.1 (choosing a latent vector that maximizes Robs −0.01 × Rmiss) in the WMF framework, by assigning a small weight for all the missing words and minimizing equation 3: Wi,j =  1, if Xij ̸= 0 wm, if Xij = 0 (5) We refer to our model as Weighted Textual Matrix Factorization [WTMF]. 1 This solution is quite elegant: 1. it explicitly tells the model that in general all missing words should not be related to the sentence; 2. meanwhile latent semantics are mainly generalized based on observed words, and the model is not penalized too much (wm is very small) when it is very confident that the sentence is highly related to a small subset of missing words based on their latent semantics profiles (bank#n#1 definition sentence is related to its missing words check loan). We adopt the same approach (assigning a small weight for some cells in WMF) proposed for recommender systems [RS] (Steck, 2010). In RS, an incomplete rating matrix R is proposed, where rows are users and columns are items. Typically, a user rates only some of the items, hence, the RS system needs to predict the missing ratings. Steck (2010) guesses a value for all the missing cells, and sets a small weight for those cells. Compared to (Steck, 2010), we are facing a different problem and targeting a different goal. We have a full matrix X where missing words have a 0 value, while the missing ratings in RS are unavailable – the values are unknown, hence R is not complete. In the RS setting, they are interested in predicting individual ratings, while we are interested in the sentence 1An efficient way to compute equation 4 is proposed in (Steck, 2010). semantics. 
More importantly, they do not have the sparsity issue (each user has rated over 100 items in the movie lens data2) and robust predictions can be made based on the observed ratings alone. 4 Evaluation for SS We need to show the impact of our proposed model WTMF on the SS task. However we are faced with a problem, the lack of a suitable large evaluation set from which we can derive robust observations. The two data sets we know of for SS are: 1. human-rated sentence pair similarity data set (Li et al., 2006) [LI06]; 2. the Microsoft Research Paraphrase Corpus (Dolan et al., 2004) [MSR04]. The LI06 data set consists of 65 pairs of noun definitions selected from the Collin Cobuild Dictionary. A subset of 30 pairs is further selected by LI06 to render the similarity scores evenly distributed. While this is the ideal data set for SS, the small size makes it impossible for tuning SS algorithms or deriving significant performance conclusions. On the other hand, the MSR04 data set comprises a much larger set of sentence pairs: 4,076 training and 1,725 test pairs. The ratings on the pairs are binary labels: similar/not similar. This is not a problem per se, however the issue is that it is very strict in its assignment of a positive label, for example the following sentence pair as cited in (Islam and Inkpen, 2008) is rated not semantically similar: Ballmer has been vocal in the past warning that Linux is a threat to Microsoft. In the memo, Ballmer reiterated the open-source threat to Microsoft. We believe that the ratings on a data set for SS should accommodate variable degrees of similarity with various ratings, however such a large scale set does not exist yet. Therefore for purposes of evaluating our proposed approach we devise a new framework inspired by the LI06 data set in that it comprises concept definitions but on a large scale. 4.1 Concept Definition Retrieval We define a new framework for evaluating SS and project it as a Concept Definition Retrieval (CDR) task where the data points are dictionary definitions. The intuition is that two definitions in different dic2http://www.grouplens.org/node/73, with 1M data set being the most widely used. tionaries referring to the same concept should be assigned large similarity. In this setting, we design the CDR task in a search engine style. The SS algorithm has access to all the definitions in WordNet (WN). Given an OntoNotes (ON) definition (Hovy et al., 2006), the SS algorithm should rank the equivalent WN definition as high as possible based on sentence similarity. The manual mapping already exists for ON to WN. One ON definition can be mapped to several WN definitions. After preprocessing we obtain 13669 ON definitions mapped to 19655 WN definitions. The data set has the advantage of being very large and it doesn’t require further human scrutiny. After the SS model learns the co-occurrence of words from WN definitions, in the testing phase, given an ON definition d, the SS algorithm needs to identify the equivalent WN definitions by computing the similarity values between all WN definitions and the ON definition d, then sorting the values in decreasing order. Clearly, it is very difficult to rank the one correct definition as highest out of all WN definitions (110,000 in total), hence we use ATOPd, area under the TOPKd(k) recall curve for an ON definition d, to measure the performance. Basically, it is the ranking of the correct WN definition among all WN definitions. 
The higher a model is able to rank the correct WN definition, the better its performance. Let Nd be the number of aligned WN definitions for the ON definition d, and Nk d be the number of aligned WN definitions in the top-k list. Then with a normalized k ∈[0,1], TOPKd(k) and ATOPd is defined as: TOPKd(k) = N k d /Nd ATOPd = Z 1 0 TOPKd(k)dk (6) ATOPd computes the normalized rank (in the range of [0, 1]) of aligned WN definitions among all WN definitions, with value 0.5 being the random case, and 1 being ranked as most similar. 5 Experiments and Results We evaluate WTMF on three data sets: 1. CDR data set using ATOP metric; 2. Human-rated Sentence Similarity data set [LI06] using Pearson and Spearman Correlation; 3. MSR Paraphrase corpus [MSR04] using accuracy. The performance of WTMF on CDR is compared with (a) an Information Retrieval model (IR) that is based on surface word matching, (b) an ngram model (N-gram) that captures phrase overlaps by returning the number of overlapping ngrams as the similarity score of two sentences, (c) LSA that uses svds() function in Matlab, and (d) LDA that uses Gibbs Sampling for inference (Griffiths and Steyvers, 2004). WTMF is also compared with all existing reported SS results on LI06 and MSR04 data sets, as well as LDA that is trained on the same data as WTMF. The similarity of two sentences is computed by cosine similarity (except N-gram). More details on each task will be explained in the subsections. To eliminate randomness in statistical models (WTMF and LDA), all the reported results are averaged over 10 runs. We run 20 iterations for WTMF. And we run 5000 iterations for LDA; each LDA model is averaged over the last 10 Gibbs Sampling iterations to get more robust predictions. The latent vector of a sentence is computed by: (1) using equation 4 in WTMF, or (2) summing up the latent vectors of all the constituent words weighted by Xij in LSA and LDA, similar to the work reported in (Mihalcea et al., 2006). For LDA the latent vector of a word is computed by P(z|w). It is worth noting that we could directly use the estimated topic distribution θj to represent a sentence, however, as discussed the topic distribution has only non-zero values on one or two topics, leading to a low ATOP value around 0.8. 5.1 Corpus The corpus we use comprises three dictionaries WN, ON, Wiktionary [Wik],3 Brown corpus. For all dictionaries, we only keep the definitions without examples, and discard the mapping between sense ids and definitions. All definitions are simply treated as individual documents. We crawl Wik and remove the entries that are not tagged as noun, verb, adjective, or adverb, resulting in 220, 000 entries. For the Brown corpus, each sentence is treated as a document in order to create more coherent co-occurrence values. All data is tokenized, pos-tagged4, and lem3http://en.wiktionary.org/wiki/Wiktionary:Main Page 4http://nlp.stanford.edu/software/tagger.shtml Models Parameters Dev Test 1. IR 0.8578 0.8515 2. N-gram 0.8238 0.8171 3. LSA 0.8218 0.8143 4a. LDA α = 0.1, β = 0.01 0.9466 ± 0.0020 0.9427 ± 0.0006 4b. LDA α = 0.05, β = 0.05 0.9506 ± 0.0017 0.9470 ± 0.0005 5. WTMF wm = 1, λ = 0 0.8273 ± 0.0028 0.8273 ± 0.0014 6. WTMF wm = 0, λ = 20 0.8745 ± 0.0058 0.8645 ± 0.0031 7a. WTMF wm = 0.01, λ = 20 0.9555 ± 0.0015 0.9511 ± 0.0003 7b. WTMF wm = 0.0005, λ = 20 0.9610 ± 0.0011 0.9558 ± 0.0004 Table 2: ATOP Values of Models (K = 100 for LSA/LDA/WTMF) matized5. The importance of words in a sentence is estimated by the TF-IDF schema. 
All the latent variable models (LSA, LDA, WTMF) are built on the same set of corpus: WN+Wik+Brown (393, 666 sentences and 4, 262, 026 words). Words that appear only once are removed. The test data is never used during training phrase. 5.2 Concept Definition Retrieval Among the 13669 ON definitions, 1000 definitions are randomly selected as a development set (dev) for picking best parameters in the models, and the rest is used as a test set (test). The performance of each model is evaluated by the average ATOPd value over the 12669 definitions (test). We use the subscript set in ATOPset to denote the average of ATOPd of a set of ON definitions, where d ∈{set}. If all the words in an ON definition are not covered in the training data (WN+Wik+Br), then ATOPd for this instance is set to 0.5. To compute ATOPd for an ON definition efficiently, we use the rank of the aligned WN definition among a random sample (size=1000) of WN definitions, to approximate its rank among all WN definitions. In practice, the difference between using 1000 samples and all data is tiny for ATOPtest (±0.0001), due to the large number of data points in CDR. We mainly compare the performance of IR, Ngram, LSA, LDA, and WTMF models. Generally results are reported based on the last iteration. However, we observe that for model 6 in table 2, the best performance occurs at the first few iterations. Hence for that model we use the ATOPdev to indicate when to stop. 5http://wn-similarity.sourceforge.net, WordNet::QueryData 5.2.1 Results Table 2 summarizes the ATOP values on the dev and test sets. All parameters are tuned based on the dev set. In LDA, we choose an optimal combination of α and β from {0.01, 0.05, 0.1, 0.5}.In WTMF, we choose the best parameters of weight wm for missing words and λ for regularization. We fix the dimension K = 100. Later in section 5.2.2, we will see that a larger value of K can further improve the performance. WTMF that models missing words using a small weight (model 7b with wm = 0.0005) outperforms the second best model LDA by a large margin. This is because LDA only uses 10 observed words to infer a 100 dimension vector for a sentence, while WTMF takes advantage of much more missing words to learn more robust latent semantics vectors. The IR model that works in word space achieves better ATOP scores than N-gram, although the idea of N-gram is commonly used in detecting paraphrases as well as machine translation. Applying TF-IDF for N-gram is better, but still the ATOPtest is not higher: 0.8467. The reason is words are enough to capture semantics for SS, while n-grams/phrases are used for a more fine-grained level of semantics. We also present model 5 and 6 (both are WTMF), to show the impact of: 1. modeling missing words with equal weights as observed words (wm = 1) (LSA manner), and 2. not modeling missing words at all (wm = 0) (LDA manner) in the context of WTMF model. As expected, both model 5 and model 6 generate much worse results. Both LDA and model 6 ignore missing words, with better ATOPtest scores achieved by LDA. This may be due to the different inference algorithms. Model 5 and LSA are comparable, where missing words are used with a large weight. Both of them yield low results. 
This confirms our assumption 0.0001 0.0005 0.001 0.005 0.01 0.05 0.94 0.945 0.95 0.955 wm ATOP WTMF Figure 2: missing words weight wm in WTMF 50 100 150 0.94 0.945 0.95 0.955 K ATOP WTMF LDA Figure 3: dimension K in WTMF and LDA that allowing for equal impact of both observed and missing words is not the correct characterization of the semantic space. 5.2.2 Analysis In these latent variable models, there are several essential parameters: weight of missing words wm, and dimension K. Figure 2 and 3 analyze the impact of these parameters on ATOPtest. Figure 2 shows the influence of wm on ATOPtest values. The peak ATOPtest is around wm = 0.0005, while other values of wm (except wm = 0.05) also yield high ATOP values (better than LDA). We also measure the influence of the dimension K = {50, 75, 100, 125, 150} on LDA and WTMF in Figure 3, where parameters for WTMF are wm = 0.0005, λ = 20, and for LDA are α = 0.05, β = 0.05. We can see WTMF consistently outperforms LDA by an ATOP value of 0.01 in each dimension. Although a larger K yields a better result, we still use a 100 due to computational complexity. 5.3 LI06: Human-rated Sentence Similarity We also assess WTMF and LDA model on LI06 data set. We still use K = 100. As we can see in Figure 2, choosing the appropriate parameter wm could boost the performance significantly. Since we do not have any tuning data for this task, we present Pearson’s correlation r for different values of wm in Table 3. In addition, to demonstrate that wm does not overfit the 30 data points, we also evaluate on 30 pairs 35 pairs wm r ρ r ρ 0.0005 0.8247 0.8440 0.4200 0.6006 0.001 0.8470 0.8636 0.4308 0.5985 0.005 0.8876 0.8966 0.4638 0.5809 0.01 0.8984 0.9091 0.4564 0.5450 0.05 0.8804 0.8812 0.4087 0.4766 Table 3: Different wm of WTMF on LI06 (K = 100) the other 35 pairs in LI06. Same as in (Tsatsaronis et al., 2010), we also include Spearman’s rank order correlation ρ, which is correlation of ranks of similarity values . Note that r and ρ are much lower for 35 pairs set, since most of the sentence pairs have a very low similarity (the average similarity value is 0.065 in 35 pairs set and 0.367 in 30 pairs set) and SS models need to identify the tiny difference among them, thereby rendering this set much harder to predict. Using wm = 0.01 gives the best results on 30 pairs while on 35 pairs the peak values of r and ρ happens when wm = 0.005. In general, the correlations in 30 pairs and in 35 pairs are consistent, which indicates wm = 0.01 or wm = 0.005 does not overfit the 30 pairs set. Compared to CDR, LI06 data set has a strong preference for a larger wm. This could be caused by different goals of the two tasks: CDR is evaluated by the rank of the most similar ones among all candidates, while the LI06 data set treats similar pairs and dissimilar pairs as equally important. Using a smaller wm means the similarity score is computed mainly from semantics of the observed words. This benefits CDR, since it gives more accurate similarity scores for those similar pairs, but not so accurate for dissimilar pairs. In fact, from Figure 2 and Table 2 we see that wm = 0.01 also produces a very high ATOPtest value in CDR. Table 4 shows the results of all current SS models with respect to the LI06 data set (30 pairs set). We cite their best performance for all reported results. Once the correct wm = 0.01 is chosen, WTMF results in the best Pearson’s r and best Spearman’s ρ (wm = 0.005 yields the second best r and ρ). 
Same as in CDR task, WTMF outperforms LDA by a large margin in both r and ρ. It indicates that the latent vectors induced by WTMF are able to not only identify same/similar sentences, but also identify the “correct” degree of dissimilar sentences. Model r ρ STASIS (Li et al., 2006) 0.8162 0.8126 (Liu et al., 2007) 0.841 0.8538 (Feng et al., 2008) 0.756 0.608 STS (Islam and Inkpen, 2008) 0.853 0.838 LSA (O’Shea et al., 2008) 0.8384 0.8714 Omiotis (Tsatsaronis et al., 2010) 0.856 0.8905 WSD-STS (Ho et al., 2010) 0.864 0.8341 SPD-STS (Ho et al., 2010) 0.895 0.9034 LDA (α = 0.05, β = 0.05) 0.8422 0.8663 WTMF (wm = 0.005, λ = 20) 0.8876 0.8966 WTMF (wm = 0.01, λ = 20) 0.8984 0.9091 Table 4: Pearson’s correlation r and Spearman’s correlation ρ on LI06 30 pairs Model Accuracy Random 51.3 LSA (Mihalcea et al., 2006) 68.4 full model (Mihalcea et al., 2006) 70.3 STS (Islam and Inkpen, 2008) 72.6 Omiotis (Tsatsaronis et al., 2010) 69.97 LDA (α = 0.05, β = 0.05) 68.6 WTMF (wm = 0.01, λ = 20) 71.51 Table 5: Performance on MSR04 test set 5.4 MSR04: MSR Paraphrase Corpus Finally, we briefly discuss results of applying WTMF on MSR04 data. We use the same parameter setting used for the LI06 evaluation setting since both sets are human-rated sentence pairs (λ = 20, wm = 0.01, K = 100). We use the training set of MSR04 data to select a threshold of sentence similarity for the binary label. Table 5 summarizes the accuracy of other SS models noted in the literature and evaluated on MSR04 test set. Compared to previous SS work and LDA, WTMF has the second best accuracy. It suggests that WTMF is quite competitive in the paraphrase recognition task. It is worth noting that the best system on MSR04, STS (Islam and Inkpen, 2008), has much lower correlations on LI06 data set. The second best system among previous work on LI06 uses Spearman correlation, Omiotis (Tsatsaronis et al., 2010), and it yields a much worse accuracy on MSR04. The other works do not evaluate on both data sets. 6 Related Work Almost all current SS methods work in the highdimensional word space, and rely heavily on word/sense similarity measures, which is knowledge based (Li et al., 2006; Feng et al., 2008; Ho et al., 2010; Tsatsaronis et al., 2010), corpus-based (Islam and Inkpen, 2008) or hybrid (Mihalcea et al., 2006). Almost all of them are evaluated on LI06 data set. It is interesting to see that most works find word similarity measures, especially knowledge based ones, to be the most effective component, while other features do not work well (such as word order or syntactic information). Mihalcea et al. (2006) use LSA as a baseline, and O’Shea et al. (2008) train LSA on regular length documents. Both results are considerably lower than word similarity based methods. Hence, our work is the first to successfully approach SS in the latent space. Although there has been work modeling latent semantics for short texts (tweets) in LDA, the focus has been on exploiting additional features in Twitter, hence restricted to Twitter data. Ramage et al. (2010) use tweet metadata (author, hashtag) as some supervised information to model tweets. Jin et al. (2011) use long similar documents (the article that is referred by a url in tweets) to help understand the tweet. In contrast, our approach relies solely on the information in the texts by modeling local missing words, and does not need any additional data, which renders our approach much more widely applicable. 
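For readers who wish to experiment with WTMF, the alternating updates of equation 4 can be sketched as follows. This dense-matrix version is a toy illustration assuming a small vocabulary and corpus, not the implementation used in the experiments; footnote 1 points to a more efficient way of computing the updates (Steck, 2010). The default hyperparameters follow the settings reported in Section 5.

```python
import numpy as np

def wtmf(X, K=100, wm=0.01, lam=20.0, iters=20, seed=0):
    """Weighted textual matrix factorization: minimise equation 3 with the weights of
       equation 5 via the alternating updates of equation 4 (dense toy version)."""
    M, N = X.shape
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.01, size=(K, M))        # word latent vectors
    Q = rng.normal(scale=0.01, size=(K, N))        # sentence latent vectors
    W = np.where(X != 0, 1.0, wm)                  # weight 1 for observed cells, wm otherwise
    reg = lam * np.eye(K)
    for _ in range(iters):
        for i in range(M):                         # update word vector P[:, i]
            Wi = W[i]                              # diagonal of W~(i)
            P[:, i] = np.linalg.solve((Q * Wi) @ Q.T + reg, (Q * Wi) @ X[i])
        for j in range(N):                         # update sentence vector Q[:, j]
            Wj = W[:, j]                           # diagonal of W~(j)
            Q[:, j] = np.linalg.solve((P * Wj) @ P.T + reg, (P * Wj) @ X[:, j])
    return P, Q

def sentence_similarity(qa, qb):
    """Cosine similarity between two sentence vectors (columns of Q)."""
    return float(qa @ qb / (np.linalg.norm(qa) * np.linalg.norm(qb)))
```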
7 Conclusions We explicitly model missing words to alleviate the sparsity problem in modeling short texts. We also propose a new evaluation framework for sentence similarity that allows large scale tuning and testing. Experiment results on three data sets show that our model WTMF significantly outperforms existing methods. For future work, we would like to compare the text modeling performance of WTMF with LSA and LDA on regular length documents. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the U.S. Army Research Lab. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the U.S. Government. References David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3. William Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguistics. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. Jin Feng, Yi-Ming Zhou, and Trevor Martin. 2008. Sentence similarity based on relevance. In Proceedings of IPMU. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101. Chukfong Ho, Masrah Azrifah Azmi Murad, Rabiah Abdul Kadir, and Shyamala C. Doraisamy. 2010. Word sense disambiguation-based sentence similarity. In Proceedings of the 23rd International Conference on Computational Linguistics. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL. Aminul Islam and Diana Inkpen. 2008. Semantic text similarity using corpus-based word similarity and string similarity. ACM Transactions on Knowledge Discovery from Data, 2. Ou Jin, Nathan N. Liu, Kai Zhao, Yong Yu, and Qiang Yang. 2011. Transferring topical knowledge from auxiliary long texts for short text clustering. In Proceedings of the 20th ACM international conference on Information and knowledge management. David Kauchak and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL. Thomas K Landauer, Peter W. Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse Processes, 25. Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: Models and representations. In Proceedings of the 19th International Joint Conference on Artificial Intelligence. Yuhua Li, Davi d McLean, Zuhair A. Bandar, James D. O Shea, and Keeley Crockett. 2006. Sentence similarity based on semantic nets and corpus statistics. IEEE Transaction on Knowledge and Data Engineering, 18. Xiao-Ying Liu, Yi-Ming Zhou, and Ruo-Shi Zheng. 2007. Sentence similarity based on dynamic time warping. 
In The International Conference on Semantic Computing. Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the 21st National Conference on Articial Intelligence. James O’Shea, Zuhair Bandar, Keeley Crockett, and David McLean. 2008. A comparative study of two short text semantic similarity measures. In Proceedings of the Agent and Multi-Agent Systems: Technologies and Applications, Second KES International Symposium (KES-AMSTA). Daniel Ramage, Susan Dumais, and Dan Liebling. 2010. Characterizing microblogs with topic models. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media. Nathan Srebro and Tommi Jaakkola. 2003. Weighted low-rank approximations. In Proceedings of the Twentieth International Conference on Machine Learning. Harald Steck. 2010. Training and testing of recommender systems on data missing not at random. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. George Tsatsaronis, Iraklis Varlamis, and Michalis Vazirgiannis. 2010. Text relatedness based on a word thesaurus. Journal of Articial Intelligence Research, 37. Liang Zhou, Chin-Yew Lin, Dragos Stefan Munteanu, and Eduard Hovy. 2006. Paraeval: Using paraphrases to evaluate summaries automatically. In Proceedings of Human Language Tech-nology Conference of the North American Chapter of the ACL,.
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 873–882, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics Improving Word Representations via Global Context and Multiple Word Prototypes Eric H. Huang, Richard Socher∗, Christopher D. Manning, Andrew Y. Ng Computer Science Department, Stanford University, Stanford, CA 94305, USA {ehhuang,manning,ang}@stanford.edu, ∗[email protected] Abstract Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word. We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models. 1 1 Introduction Vector-space models (VSM) represent word meanings with vectors that capture semantic and syntactic information of words. These representations can be used to induce similarity measures by computing distances between the vectors, leading to many useful applications, such as information retrieval (Manning et al., 2008), document classification (Sebastiani, 2002) and question answering (Tellex et al., 2003). 1The dataset and word vectors can be downloaded at http://ai.stanford.edu/∼ehhuang/. Despite their usefulness, most VSMs share a common problem that each word is only represented with one vector, which clearly fails to capture homonymy and polysemy. Reisinger and Mooney (2010b) introduced a multi-prototype VSM where word sense discrimination is first applied by clustering contexts, and then prototypes are built using the contexts of the sense-labeled words. However, in order to cluster accurately, it is important to capture both the syntax and semantics of words. While many approaches use local contexts to disambiguate word meaning, global contexts can also provide useful topical information (Ng and Zelle, 1997). Several studies in psychology have also shown that global context can help language comprehension (Hess et al., 1995) and acquisition (Li et al., 2000). We introduce a new neural-network-based language model that distinguishes and uses both local and global context via a joint training objective. The model learns word representations that better capture the semantics of words, while still keeping syntactic information. These improved representations can be used to represent contexts for clustering word instances, which is used in the multi-prototype version of our model that accounts for words with multiple senses. We evaluate our new model on the standard WordSim-353 (Finkelstein et al., 2001) dataset that includes human similarity judgments on pairs of words, showing that combining both local and global context outperforms using only local or global context alone, and is competitive with stateof-the-art methods. 
However, one limitation of this evaluation is that the human judgments are on pairs 873 Global Context Local Context scorel scoreg Document he walks to the bank ... ... sum score river water shore global semantic vector ⋮ play weighted average Figure 1: An overview of our neural language model. The model makes use of both local and global context to compute a score that should be large for the actual next word (bank in the example), compared to the score for other words. When word meaning is still ambiguous given local context, information in global context can help disambiguation. of words presented in isolation, ignoring meaning variations in context. Since word interpretation in context is important especially for homonymous and polysemous words, we introduce a new dataset with human judgments on similarity between pairs of words in sentential context. To capture interesting word pairs, we sample different senses of words using WordNet (Miller, 1995). The dataset includes verbs and adjectives, in addition to nouns. We show that our multi-prototype model improves upon the single-prototype version and outperforms other neural language models and baselines on this dataset. 2 Global Context-Aware Neural Language Model In this section, we describe the training objective of our model, followed by a description of the neural network architecture, ending with a brief description of our model’s training method. 2.1 Training Objective Our model jointly learns word representations while learning to discriminate the next word given a short word sequence (local context) and the document (global context) in which the word sequence occurs. Because our goal is to learn useful word representations and not the probability of the next word given previous words (which prohibits looking ahead), our model can utilize the entire document to provide global context. Given a word sequence s and document d in which the sequence occurs, our goal is to discriminate the correct last word in s from other random words. We compute scores g(s, d) and g(sw, d) where sw is s with the last word replaced by word w, and g(·, ·) is the scoring function that represents the neural networks used. We want g(s, d) to be larger than g(sw, d) by a margin of 1, for any other word w in the vocabulary, which corresponds to the training objective of minimizing the ranking loss for each (s, d) found in the corpus: Cs,d = X w∈V max(0, 1 −g(s, d) + g(sw, d)) (1) Collobert and Weston (2008) showed that this ranking approach can produce good word embeddings that are useful in several NLP tasks, and allows much faster training of the model compared to optimizing log-likelihood of the next word. 2.2 Neural Network Architecture We define two scoring components that contribute to the final score of a (word sequence, document) pair. The scoring components are computed by two neural networks, one capturing local context and the other global context, as shown in Figure 1. We now describe how each scoring component is computed. The score of local context uses the local word sequence s. We first represent the word sequence s as 874 an ordered list of vectors x = (x1, x2, ..., xm) where xi is the embedding of word i in the sequence, which is a column in the embedding matrix L ∈Rn×|V | where |V | denotes the size of the vocabulary. The columns of this embedding matrix L are the word vectors and will be learned and updated during training. 
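A minimal sketch of the objective in Eq. (1): the word sequence and document are mapped to columns of the embedding matrix L, and a hinge loss with margin 1 is accumulated over corrupt last words. The scoring function g(·, ·) is left abstract here, since the two networks that compute it are described next; all function and variable names below are our own.

```python
import numpy as np

# L is the n x |V| embedding matrix; `seq` and `doc` are lists of word
# indices for the local word sequence s and the document d.

def lookup(L, word_ids):
    """Columns of L are the word vectors; gather the embeddings of a sequence."""
    return [L[:, i] for i in word_ids]

def ranking_loss(score, L, seq, doc, vocab_size):
    """Sum of hinge losses over corrupt last words, as in Eq. (1);
    `score` stands in for the scoring function g(s, d)."""
    g_correct = score(lookup(L, seq), lookup(L, doc))
    loss = 0.0
    for w in range(vocab_size):
        corrupted = seq[:-1] + [w]   # replace the last word of s by w
        g_corrupt = score(lookup(L, corrupted), lookup(L, doc))
        loss += max(0.0, 1.0 - g_correct + g_corrupt)
    return loss
```

During training (Section 2.3), the sum over the vocabulary is approximated by sampling a single corrupt word per (s, d) pair.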
To compute the score of local context, scorel, we use a neural network with one hidden layer: a1 = f(W1[x1; x2; ...; xm] + b1) (2) scorel = W2a1 + b2 (3) where [x1; x2; ...; xm] is the concatenation of the m word embeddings representing sequence s, f is an element-wise activation function such as tanh, a1 ∈Rh×1 is the activation of the hidden layer with h hidden nodes, W1 ∈Rh×(mn) and W2 ∈R1×h are respectively the first and second layer weights of the neural network, and b1, b2 are the biases of each layer. For the score of the global context, we represent the document also as an ordered list of word embeddings, d = (d1, d2, ..., dk). We first compute the weighted average of all word vectors in the document: c = Pk i=1 w(ti)di Pk i=1 w(ti) (4) where w(·) can be any weighting function that captures the importance of word ti in the document. We use idf-weighting as the weighting function. We use a two-layer neural network to compute the global context score, scoreg, similar to the above: a1(g) = f(W (g) 1 [c; xm] + b(g) 1 ) (5) scoreg = W (g) 2 a(g) 1 + b(g) 2 (6) where [c; xm] is the concatenation of the weighted average document vector and the vector of the last word in s, a1(g) ∈Rh(g)×1 is the activation of the hidden layer with h(g) hidden nodes, W (g) 1 ∈ Rh(g)×(2n) and W (g) 2 ∈R1×h(g) are respectively the first and second layer weights of the neural network, and b(g) 1 , b(g) 2 are the biases of each layer. Note that instead of using the document where the sequence occurs, we can also specify a fixed k > m that captures larger context. The final score is the sum of the two scores: score = scorel + scoreg (7) The local score preserves word order and syntactic information, while the global score uses a weighted average which is similar to bag-of-words features, capturing more of the semantics and topics of the document. Note that Collobert and Weston (2008)’s language model corresponds to the network using only local context. 2.3 Learning Following Collobert and Weston (2008), we sample the gradient of the objective by randomly choosing a word from the dictionary as a corrupt example for each sequence-document pair, (s, d), and take the derivative of the ranking loss with respect to the parameters: weights of the neural network and the embedding matrix L. These weights are updated via backpropagation. The embedding matrix L is the word representations. We found that word embeddings move to good positions in the vector space faster when using mini-batch L-BFGS (Liu and Nocedal, 1989) with 1000 pairs of good and corrupt examples per batch for training, compared to stochastic gradient descent. 3 Multi-Prototype Neural Language Model Despite distributional similarity models’ successful applications in various NLP tasks, one major limitation common to most of these models is that they assume only one representation for each word. This single-prototype representation is problematic because many words have multiple meanings, which can be wildly different. Using one representation simply cannot capture the different meanings. Moreover, using all contexts of a homonymous or polysemous word to build a single prototype could hurt the representation, which cannot represent any one of the meanings well as it is influenced by all meanings of the word. Instead of using only one representation per word, Reisinger and Mooney (2010b) proposed the multiprototype approach for vector-space models, which uses multiple representations to capture different senses and usages of a word. 
We show how our 875 model can readily adopt the multi-prototype approach. We present a way to use our learned single-prototype embeddings to represent each context window, which can then be used by clustering to perform word sense discrimination (Sch¨utze, 1998). In order to learn multiple prototypes, we first gather the fixed-sized context windows of all occurrences of a word (we use 5 words before and after the word occurrence). Each context is represented by a weighted average of the context words’ vectors, where again, we use idf-weighting as the weighting function, similar to the document context representation described in Section 2.2. We then use spherical k-means to cluster these context representations, which has been shown to model semantic relations well (Dhillon and Modha, 2001). Finally, each word occurrence in the corpus is re-labeled to its associated cluster and is used to train the word representation for that cluster. Similarity between a pair of words (w, w′) using the multi-prototype approach can be computed with or without context, as defined by Reisinger and Mooney (2010b): AvgSimC(w, w′) = 1 K2 k X i=1 k X j=1 p(c, w, i)p(c′, w′, j)d(µi(w), µj(w′)) (8) where p(c, w, i) is the likelihood that word w is in its cluster i given context c, µi(w) is the vector representing the i-th cluster centroid of w, and d(v, v′) is a function computing similarity between two vectors, which can be any of the distance functions presented by Curran (2004). The similarity measure can be computed in absence of context by assuming uniform p(c, w, i) over i. 4 Experiments In this section, we first present a qualitative analysis comparing the nearest neighbors of our model’s embeddings with those of others, showing our embeddings better capture the semantics of words, with the use of global context. Our model also improves the correlation with human judgments on a word similarity task. Because word interpretation in context is important, we introduce a new dataset with human judgments on similarity of pairs of words in sentential context. Finally, we show that our model outperforms other methods on this dataset and also that the multi-prototype approach improves over the singleprototype approach. We chose Wikipedia as the corpus to train all models because of its wide range of topics and word usages, and its clean organization of document by topic. We used the April 2010 snapshot of the Wikipedia corpus (Shaoul and Westbury, 2010), with a total of about 2 million articles and 990 million tokens. We use a dictionary of the 30,000 most frequent words in Wikipedia, converted to lower case. In preprocessing, we keep the frequent numbers intact and replace each digit of the uncommon numbers to “DG” so as to preserve information such as it being a year (e.g. “DGDGDGDG”). The converted numbers that are rare are mapped to a NUMBER token. Other rare words not in the dictionary are mapped to an UNKNOWN token. For all experiments, our models use 50dimensional embeddings. We use 10-word windows of text as the local context, 100 hidden units, and no weight regularization for both neural networks. For multi-prototype variants, we fix the number of prototypes to be 10. 4.1 Qualitative Evaluations In order to show that our model learns more semantic word representations with global context, we give the nearest neighbors of our single-prototype model versus C&W’s, which only uses local context. 
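Before turning to the qualitative comparison, the context-dependent similarity of Eq. (8) can be sketched directly in terms of the learned cluster centroids. The helper below is illustrative only: the inverse-distance choice for p(c, w, i) is one simple option, and the constant 1/K² factor of Eq. (8), which merely rescales all scores, is dropped.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity, used as the vector similarity d(v, v')."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def context_probs(context_vec, centroids):
    """p(c, w, i): here, normalised inverse distance of the (idf-weighted)
    context vector to each of the word's cluster centroids."""
    inv = np.array([1.0 / (np.linalg.norm(context_vec - mu) + 1e-8)
                    for mu in centroids])
    return inv / inv.sum()

def avg_sim_c(context1, context2, centroids1, centroids2):
    """Context-weighted similarity between two word occurrences (cf. Eq. 8)."""
    p1 = context_probs(context1, centroids1)
    p2 = context_probs(context2, centroids2)
    return sum(p1[i] * p2[j] * cosine(centroids1[i], centroids2[j])
               for i in range(len(centroids1))
               for j in range(len(centroids2)))
```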
The nearest neighbors of a word are computed by comparing the cosine similarity between the center word and all other words in the dictionary. Table 1 shows the nearest neighbors of some words. The nearest neighbors of “market” that C&W’s embeddings give are more constrained by the syntactic constraint that words in plural form are only close to other words in plural form, whereas our model captures that the singular and plural forms of a word are similar in meaning. Other examples show that our model induces nearest neighbors that better capture semantics. Table 2 shows the nearest neighbors of our model using the multi-prototype approach. We see that the clustering is able to group contexts of different 876 Center Word C&W Our Model markets firms, industries, stores market, firms, businesses American Australian, Indian, Italian U.S., Canadian, African illegal alleged, overseas, banned harmful, prohibited, convicted Table 1: Nearest neighbors of words based on cosine similarity. Our model is less constrained by syntax and is more semantic. Center Word Nearest Neighbors bank 1 corporation, insurance, company bank 2 shore, coast, direction star 1 movie, film, radio star 2 galaxy, planet, moon cell 1 telephone, smart, phone cell 2 pathology, molecular, physiology left 1 close, leave, live left 2 top, round, right Table 2: Nearest neighbors of word embeddings learned by our model using the multi-prototype approach based on cosine similarity. The clustering is able to find the different meanings, usages, and parts of speech of the words. meanings of a word into separate groups, allowing our model to learn multiple meaningful representations of a word. 4.2 WordSim-353 A standard dataset for evaluating vector-space models is the WordSim-353 dataset (Finkelstein et al., 2001), which consists of 353 pairs of nouns. Each pair is presented without context and associated with 13 to 16 human judgments on similarity and relatedness on a scale from 0 to 10. For example, (cup,drink) received an average score of 7.25, while (cup,substance) received an average score of 1.92. Table 3 shows our results compared to previous methods, including C&W’s language model and the hierarchical log-bilinear (HLBL) model (Mnih and Hinton, 2008), which is a probabilistic, linear neural model. We downloaded these embeddings from Turian et al. (2010). These embeddings were trained on the smaller corpus RCV1 that contains one year of Reuters English newswire, and show similar correlations on the dataset. We report the result of Model Corpus ρ × 100 Our Model-g Wiki. 22.8 C&W RCV1 29.5 HLBL RCV1 33.2 C&W* Wiki. 49.8 C&W Wiki. 55.3 Our Model Wiki. 64.2 Our Model* Wiki. 71.3 Pruned tf-idf Wiki. 73.4 ESA Wiki. 75 Tiered Pruned tf-idf Wiki. 76.9 Table 3: Spearman’s ρ correlation on WordSim-353, showing our model’s improvement over previous neural models for learning word embeddings. C&W* is the word embeddings trained and provided by C&W. Our Model* is trained without stop words, while Our Modelg uses only global context. Pruned tf-idf (Reisinger and Mooney, 2010b) and ESA (Gabrilovich and Markovitch, 2007) are also included. our re-implementation of C&W’s model trained on Wikipedia, showing the large effect of using a different corpus. Our model is able to learn more semantic word embeddings and noticeably improves upon C&W’s model. Note that our model achieves higher correlation (64.2) than either using local context alone (C&W: 55.3) or using global context alone (Our Model-g: 22.8). 
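The figures in Table 3 are Spearman's ρ (×100) between the cosine similarities of the learned embeddings and the averaged human ratings; a minimal sketch of that evaluation follows, with hypothetical data structures for the embeddings and the rated word pairs.

```python
import numpy as np
from scipy.stats import spearmanr

# `embeddings` maps a word to its learned vector; `pairs` holds
# (word1, word2, mean_human_rating) tuples from WordSim-353.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def wordsim_rho(embeddings, pairs):
    model_scores, human_scores = [], []
    for w1, w2, rating in pairs:
        if w1 in embeddings and w2 in embeddings:
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(rating)
    rho, _ = spearmanr(model_scores, human_scores)
    return 100 * rho   # Table 3 reports rho x 100
```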
We also found that correlation can be further improved by removing stop words (71.3). Thus, each window of text (training example) contains more information but still preserves some syntactic information as the words are still ordered in the local context. 4.3 New Dataset: Word Similarity in Context The many previous datasets that associate human judgments on similarity between pairs of words, such as WordSim-353, MC (Miller and Charles, 1991) and RG (Rubenstein and Goodenough, 1965), have helped to advance the development of vectorspace models. However, common to all datasets is that similarity scores are given to pairs of words in isolation. This is problematic because the meanings of homonymous and polysemous words depend highly on the words’ contexts. For example, in the two phrases, “he swings the baseball bat” and “the 877 Word 1 Word 2 Located downtown along the east bank of the Des Moines River ... This is the basis of all money laundering , a track record of depositing clean money before slipping through dirty money ... Inside the ruins , there are bats and a bowl with Pokeys that fills with sand over the course of the race , and the music changes somewhat while inside ... An aggressive lower order batsman who usually bats at No. 11 , Muralitharan is known for his tendency to back away to leg and slog ... An example of legacy left in the Mideast from these nobles is the Krak des Chevaliers ’ enlargement by the Counts of Tripoli and Toulouse ... ... one should not adhere to a particular explanation , only in such measure as to be ready to abandon it if it be proved with certainty to be false ... ... and Andy ’s getting ready to pack his bags and head up to Los Angeles tomorrow to get ready to fly back home on Thursday ... she encounters Ben ( Duane Jones ) , who arrives in a pickup truck and defends the house against another pack of zombies ... In practice , there is an unknown phase delay between the transmitter and receiver that must be compensated by ” synchronization ” of the receivers local oscillator ... but Gilbert did not believe that she was dedicated enough , and when she missed a rehearsal , she was dismissed ... Table 4: Example pairs from our new dataset. Note that words in a pair can be the same word and have different parts of speech. bat flies”, bat has completely different meanings. It is unclear how this variation in meaning is accounted for in human judgments of words presented without context. One of the main contributions of this paper is the creation of a new dataset that addresses this issue. The dataset has three interesting characteristics: 1) human judgments are on pairs of words presented in sentential context, 2) word pairs and their contexts are chosen to reflect interesting variations in meanings of homonymous and polysemous words, and 3) verbs and adjectives are present in addition to nouns. We now describe our methodology in constructing the dataset. 4.3.1 Dataset Construction Our procedure of constructing the dataset consists of three steps: 1) select a list a words, 2) for each word, select another word to form a pair, 3) for each word in a pair, find a sentential context. We now describe each step in detail. In step 1, in order to make sure we select a diverse list of words, we consider three attributes of a word: frequency in a corpus, number of parts of speech, and number of synsets according to WordNet. 
For frequency, we divide words into three groups, top 2,000 most frequent, between 2,000 and 5,000, and between 5,000 to 10,000 based on occurrences in Wikipedia. For number of parts of speech, we group words based on their number of possible parts of speech (noun, verb or adjective), from 1 to 3. We also group words by their number of synsets: [0,5], [6,10], [11, 20], and [20, max]. Finally, we sample at most 15 words from each combination in the Cartesian product of the above groupings. In step 2, for each of the words selected in step 1, we want to choose the other word so that the pair captures an interesting relationship. Similar to Manandhar et al. (2010), we use WordNet to first randomly select one synset of the first word, we then construct a set of words in various relations to the first word’s chosen synset, including hypernyms, hyponyms, holonyms, meronyms and attributes. We randomly select a word from this set of words as the second word in the pair. We try to repeat the above twice to generate two pairs for each word. In addition, for words with more than five synsets, we allow the second word to be the same as the first, but with different synsets. We end up with pairs of words as well as the one chosen synset for each word in the pairs. In step 3, we aim to extract a sentence from Wikipedia for each word, which contains the word and corresponds to a usage of the chosen synset. We first find all sentences in which the word occurs. We then POS tag2 these sentences and filter out those that do not match the chosen POS. To find the 2We used the MaxEnt Treebank POS tagger in the python nltk library. 878 Model ρ × 100 C&W-S 57.0 Our Model-S 58.6 Our Model-M AvgSim 62.8 Our Model-M AvgSimC 65.7 tf-idf-S 26.3 Pruned tf-idf-S 62.5 Pruned tf-idf-M AvgSim 60.4 Pruned tf-idf-M AvgSimC 60.5 Table 5: Spearman’s ρ correlation on our new dataset. Our Model-S uses the single-prototype approach, while Our Model-M uses the multi-prototype approach. AvgSim calculates similarity with each prototype contributing equally, while AvgSimC weighs the prototypes according to probability of the word belonging to that prototype’s cluster. word usages that correspond to the chosen synset, we first construct a set of related words of the chosen synset, including hypernyms, hyponyms, holonyms, meronyms and attributes. Using this set of related words, we filter out a sentence if the document in which the sentence appears does not include one of the related words. Finally, we randomly select one sentence from those that are left. Table 4 shows some examples from the dataset. Note that the dataset also includes pairs of the same word. Single-prototype models would give the max similarity score for those pairs, which can be problematic depending on the words’ contexts. This dataset requires models to examine context when determining word meaning. Using Amazon Mechanical Turk, we collected 10 human similarity ratings for each pair, as Snow et al. (2008) found that 10 non-expert annotators can achieve very close inter-annotator agreement with expert raters. To ensure worker quality, we only allowed workers with over 95% approval rate to work on our task. Furthermore, we discarded all ratings by a worker if he/she entered scores out of the accepted range or missed a rating, signaling lowquality work. We obtained a total of 2,003 word pairs and their sentential contexts. The word pairs consist of 1,712 unique words. 
Of the 2,003 word pairs, 1328 are noun-noun pairs, 399 verb-verb, 140 verb-noun, 97 adjective-adjective, 30 noun-adjective, and 9 verbadjective. 241 pairs are same-word pairs. 4.3.2 Evaluations on Word Similarity in Context For evaluation, we also compute Spearman correlation between a model’s computed similarity scores and human judgments. Table 5 compares different models’ results on this dataset. We compare against the following baselines: tf-idf represents words in a word-word matrix capturing co-occurrence counts in all 10-word context windows. Reisinger and Mooney (2010b) found pruning the low-value tf-idf features helps performance. We report the result of this pruning technique after tuning the threshold value on this dataset, removing all but the top 200 features in each word vector. We tried the same multi-prototype approach and used spherical k-means3 to cluster the contexts using tf-idf representations, but obtained lower numbers than singleprototype (55.4 with AvgSimC). We then tried using pruned tf-idf representations on contexts with our clustering assignments (included in Table 5), but still got results worse than the single-prototype version of the pruned tf-idf model (60.5 with AvgSimC). This suggests that the pruned tf-idf representations might be more susceptible to noise or mistakes in context clustering. By utilizing global context, our model outperforms C&W’s vectors and the above baselines on this dataset. With multiple representations per word, we show that the multi-prototype approach can improve over the single-prototype version without using context (62.8 vs. 58.6). Moreover, using AvgSimC4 which takes contexts into account, the multi-prototype model obtains the best performance (65.7). 5 Related Work Neural language models (Bengio et al., 2003; Mnih and Hinton, 2007; Collobert and Weston, 2008; Schwenk and Gauvain, 2002; Emami et al., 2003) have been shown to be very powerful at language modeling, a task where models are asked to accurately predict the next word given previously seen words. By using distributed representations of 3We first tried movMF as in Reisinger and Mooney (2010b), but were unable to get decent results (only 31.5). 4probability of being in a cluster is calculated as the inverse of the distance to the cluster centroid. 879 words which model words’ similarity, this type of models addresses the data sparseness problem that n-gram models encounter when large contexts are used. Most of these models used relative local contexts of between 2 to 10 words. Schwenk and Gauvain (2002) tried to incorporate larger context by combining partial parses of past word sequences and a neural language model. They used up to 3 previous head words and showed increased performance on language modeling. Our model uses a similar neural network architecture as these models and uses the ranking-loss training objective proposed by Collobert and Weston (2008), but introduces a new way to combine local and global context to train word embeddings. Besides language modeling, word embeddings induced by neural language models have been useful in chunking, NER (Turian et al., 2010), parsing (Socher et al., 2011b), sentiment analysis (Socher et al., 2011c) and paraphrase detection (Socher et al., 2011a). However, they have not been directly evaluated on word similarity tasks, which are important for tasks such as information retrieval and summarization. Our experiments show that our word embeddings are competitive in word similarity tasks. 
Most of the previous vector-space models use a single vector to represent a word even though many words have multiple meanings. The multi-prototype approach has been widely studied in models of categorization in psychology (Rosseel, 2002; Griffiths et al., 2009), while Sch¨utze (1998) used clustering of contexts to perform word sense discrimination. Reisinger and Mooney (2010b) combined the two approaches and applied them to vector-space models, which was further improved in Reisinger and Mooney (2010a). Two other recent papers (Dhillon et al., 2011; Reddy et al., 2011) present models for constructing word representations that deal with context. It would be interesting to evaluate those models on our new dataset. Many datasets with human similarity ratings on pairs of words, such as WordSim-353 (Finkelstein et al., 2001), MC (Miller and Charles, 1991) and RG (Rubenstein and Goodenough, 1965), have been widely used to evaluate vector-space models. Motivated to evaluate composition models, Mitchell and Lapata (2008) introduced a dataset where an intransitive verb, presented with a subject noun, is compared to another verb chosen to be either similar or dissimilar to the intransitive verb in context. The context is short, with only one word, and only verbs are compared. Erk and Pad´o (2008), Thater et al. (2011) and Dinu and Lapata (2010) evaluated word similarity in context with a modified task where systems are to rerank gold-standard paraphrase candidates given the SemEval 2007 Lexical Substitution Task dataset. This task only indirectly evaluates similarity as only reranking of already similar words are evaluated. 6 Conclusion We presented a new neural network architecture that learns more semantic word representations by using both local and global context in learning. These learned word embeddings can be used to represent word contexts as low-dimensional weighted average vectors, which are then clustered to form different meaning groups and used to learn multi-prototype vectors. We introduced a new dataset with human judgments on similarity between pairs of words in context, so as to evaluate model’s abilities to capture homonymy and polysemy of words in context. Our new multi-prototype neural language model outperforms previous neural models and competitive baselines on this new dataset. Acknowledgments The authors gratefully acknowledges the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181, and the DARPA Deep Learning program under contract number FA865010-C-7020. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL, or the US government. References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, Christian Jauvin, Jaz K, Thomas Hofmann, Tomaso Poggio, and John Shawe-taylor. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. 880 Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, ICML ’08, pages 160–167, New York, NY, USA. ACM. James Richard Curran. 2004. From distributional to semantic similarity. Technical report. Inderjit S. Dhillon and Dharmendra S. Modha. 2001. Concept decompositions for large sparse text data using clustering. Mach. 
Learn., 42:143–175, January. Paramveer S. Dhillon, Dean Foster, and Lyle Ungar. 2011. Multi-view learning of word embeddings via cca. In Advances in Neural Information Processing Systems (NIPS), volume 24. Georgiana Dinu and Mirella Lapata. 2010. Measuring distributional similarity in context. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 1162–1172, Stroudsburg, PA, USA. Association for Computational Linguistics. Ahmad Emami, Peng Xu, and Frederick Jelinek. 2003. Using a connectionist model in a syntactical based language model. In Acoustics, Speech, and Signal Processing, pages 372–375. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 897–906, Stroudsburg, PA, USA. Association for Computational Linguistics. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: the concept revisited. In Proceedings of the 10th international conference on World Wide Web, WWW ’01, pages 406–414, New York, NY, USA. ACM. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipedia-based explicit semantic analysis. In Proceedings of the 20th international joint conference on Artifical intelligence, IJCAI’07, pages 1606–1611, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Thomas L Griffiths, Kevin R Canini, Adam N Sanborn, and Daniel J Navarro. 2009. Unifying rational models of categorization via the hierarchical dirichlet process. Brain, page 323328. David J Hess, Donald J Foss, and Patrick Carroll. 1995. Effects of global and local context on lexical processing during language comprehension. Journal of Experimental Psychology: General, 124(1):62–82. Ping Li, Curt Burgess, and Kevin Lund. 2000. The acquisition of word meaning through global lexical cooccurrences. D. C. Liu and J. Nocedal. 1989. On the limited memory bfgs method for large scale optimization. Math. Program., 45(3):503–528, December. Suresh Manandhar, Ioannis P Klapaftis, Dmitriy Dligach, and Sameer S Pradhan. 2010. Semeval-2010 task 14: Word sense induction & disambiguation. Word Journal Of The International Linguistic Association, (July):63–68. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schtze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language & Cognitive Processes, 6(1):1–28. George A. Miller. 1995. Wordnet: A lexical database for english. Communications of the ACM, 38:39–41. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In In Proceedings of ACL-08: HLT, pages 236–244. Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the 24th international conference on Machine learning, ICML ’07, pages 641–648, New York, NY, USA. ACM. Andriy Mnih and Geoffrey Hinton. 2008. A scalable hierarchical distributed language model. In In NIPS. Ht Ng and J Zelle. 1997. Corpus-based approaches to semantic interpretation in natural language processing. AI Magazine, 18(4):45–64. Siva Reddy, Ioannis Klapaftis, Diana McCarthy, and Suresh Manandhar. 2011. Dynamic and static prototype vectors for semantic composition. 
In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 705–713, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Joseph Reisinger and Raymond Mooney. 2010a. A mixture model with sharing for lexical semantics. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 1173–1182, Stroudsburg, PA, USA. Association for Computational Linguistics. Joseph Reisinger and Raymond J. Mooney. 2010b. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 109–117, Stroudsburg, PA, USA. Association for Computational Linguistics. Yves Rosseel. 2002. Mixture models of categorization. Journal of Mathematical Psychology, 46:178–210. 881 Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Commun. ACM, 8:627–633, October. Hinrich Sch¨utze. 1998. Automatic word sense discrimination. Journal of Computational Linguistics, 24:97– 123. Holger Schwenk and Jean-luc Gauvain. 2002. Connectionist language modeling for large vocabulary continuous speech recognition. In In International Conference on Acoustics, Speech and Signal Processing, pages 765–768. Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM Comput. Surv., 34:1– 47, March. Cyrus Shaoul and Chris Westbury. 2010. The westbury lab wikipedia corpus. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 254–263, Stroudsburg, PA, USA. Association for Computational Linguistics. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011a. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems 24. Richard Socher, Cliff C. Lin, Andrew Y. Ng, and Christopher D. Manning. 2011b. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 26th International Conference on Machine Learning (ICML). Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011c. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stefanie Tellex, Boris Katz, Jimmy Lin, Aaron Fernandes, and Gregory Marton. 2003. Quantitative evaluation of passage retrieval algorithms for question answering. In Proceedings of the 26th Annual International ACM SIGIR Conference on Search and Development in Information Retrieval, pages 41–47. ACM Press. Stefan Thater, Hagen F¨urstenau, and Manfred Pinkal. 2011. Word meaning in context: a simple and effective vector model. In Proceedings of the 5th International Joint Conference on Natural Language Processing, IJCNLP ’11. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 384–394, Stroudsburg, PA, USA. Association for Computational Linguistics. 882
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 883–891, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics Exploiting Social Information in Grounded Language Learning via Grammatical Reductions Mark Johnson Department of Computing Macquarie University Sydney, Australia [email protected] Katherine Demuth Department of Linguistics Macquarie University Sydney, Australia [email protected] Michael Frank Department of Psychology Stanford University Stanford, California [email protected] Abstract This paper uses an unsupervised model of grounded language acquisition to study the role that social cues play in language acquisition. The input to the model consists of (orthographically transcribed) child-directed utterances accompanied by the set of objects present in the non-linguistic context. Each object is annotated by social cues, indicating e.g., whether the caregiver is looking at or touching the object. We show how to model the task of inferring which objects are being talked about (and which words refer to which objects) as standard grammatical inference, and describe PCFG-based unigram models and adaptor grammar-based collocation models for the task. Exploiting social cues improves the performance of all models. Our models learn the relative importance of each social cue jointly with word-object mappings and collocation structure, consistent with the idea that children could discover the importance of particular social information sources during word learning. 1 Introduction From learning sounds to learning the meanings of words, social interactions are extremely important for children’s early language acquisition (Baldwin, 1993; Kuhl et al., 2003). For example, children who engage in more joint attention (e.g. looking at particular objects together) with caregivers tend to learn words faster (Carpenter et al., 1998). Yet computational or formal models of social interaction are rare, and those that exist have rarely gone beyond the stage of cue-weighting models. In order to study the role that social cues play in language acquisition, this paper presents a structured statistical model of grounded learning that learns a mapping between words and objects from a corpus of child-directed utterances in a completely unsupervised fashion. It exploits five different social cues, which indicate which object (if any) the child is looking at, which object the child is touching, etc. Our models learn the salience of each social cue in establishing reference, relative to their co-occurrence with objects that are not being referred to. Thus, this work is consistent with a view of language acquisition in which children learn to learn, discovering organizing principles for how language is organized and used socially (Baldwin, 1993; Hollich et al., 2000; Smith et al., 2002). We reduce the grounded learning task to a grammatical inference problem (Johnson et al., 2010; B¨orschinger et al., 2011). The strings presented to our grammatical learner contain a prefix which encodes the objects and their social cues for each utterance, and the rules of the grammar encode relationships between these objects and specific words. These rules permit every object to map to every word (including function words; i.e., there is no “stop word” list), and the learning process decides which of these rules will have a non-trivial probability (these encode the object-word mappings the system has learned). 
This reduction of grounded learning to grammatical inference allows us to use standard grammatical inference procedures to learn our models. Here we use the adaptor grammar package described in Johnson et al. (2007) and Johnson and Goldwater (2009) with “out of the box” default settings; no parameter tuning whatsoever was done. Adaptor grammars are a framework for specifying hierarchical non-parametric models that has been previously used to model language acquisition (Johnson, 2008). 883 Social cue Value child.eyes objects child is looking at child.hands objects child is touching mom.eyes objects care-giver is looking at mom.hands objects care-giver is touching mom.point objects care-giver is pointing to Figure 1: The 5 social cues in the Frank et al. (to appear) corpus. The value of a social cue for an utterance is a subset of the available topics (i.e., the objects in the nonlinguistic context) of that utterance. A semanticist might argue that our view of referential mapping is flawed: full noun phrases (e.g., the dog), rather than nouns, refer to specific objects, and nouns denote properties (e.g., dog denotes the property of being a dog). Learning that a noun, e.g., dog, is part of a phrase used to refer to a specific dog (say, Fido) does not suffice to determine the noun’s meaning: the noun could denote a specific breed of dog, or animals in general. But learning word-object relationships is a plausible first step for any learner: it is often only the contrast between learned relationships and novel relationships that allows children to induce super- or sub-ordinate mappings (Clark, 1987). Nevertheless, in deference to such objections, we call the object that a phrase containing a given noun refers to the topic of that noun. (This is also appropriate, given that our models are specialisations of topic models). Our models are intended as an “ideal learner” approach to early social language learning, attempting to weight the importance of social and structural factors in the acquisition of word-object correspondences. From this perspective, the primary goal is to investigate the relationships between acquisition tasks (Johnson, 2008; Johnson et al., 2010), looking for synergies (areas of acquisition where attempting two learning tasks jointly can provide gains in both) as well as areas where information overlaps. 1.1 A training corpus for social cues Our work here uses a corpus of child-directed speech annotated with social cues, described in Frank et al. (to appear). The corpus consists of 4,763 orthographically-transcribed utterances of caregivers to their pre-linguistic children (ages 6, 12, and 18 months) during home visits where children played with a consistent set of toys. The sessions were video-taped, and each utterance was annotated with the five social cues described in Figure 1. Each utterance in the corpus contains the following information: • the sequence of orthographic words uttered by the care-giver, • a set of available topics (i.e., objects in the nonlinguistic objects), • the values of the social cues, and • a set of intended topics, which the care-giver refers to. Figure 2 presents this information for an example utterance. All of these but the intended topics are provided to our learning algorithms; the intended topics are used to evaluate the output produced by our learners. 
Generally the intended topics consist of zero or one elements from the available topics, but not always: it is possible for the caregiver to refer to two objects in a single utterance, or to refer to an object not in the current non-linguistic context (e.g., to a toy that has been put away). There is a considerable amount of anaphora in this corpus, which our models currently ignore. Frank et al. (to appear) give extensive details on the corpus, including inter-annotator reliability information for all annotations, and provide detailed statistical analyses of the relationships between the various social cues, the available topics and the intended topics. That paper also gives instructions on obtaining the corpus. 1.2 Previous work There is a growing body of work on the role of social cues in language acquisition. The language acquisition research community has long recognized the importance of social cues for child language acquisition (Baldwin, 1991; Carpenter et al., 1998; Kuhl et al., 2003). Siskind (1996) describes one of the first examples of a model that learns the relationship between words and topics, albeit in a non-statistical framework. Yu and Ballard (2007) describe an associative learner that associates words with topics and that exploits prosodic as well as social cues. The relative importance of the various social cues are specified a priori in their model (rather than learned, as they are here), and unfortunately their training corpus is not available. Frank et al. (2008) describes a Bayesian model that learns the relationship between words and topics, but the version of their model that included social cues presented a number of challenges for inference. The unigram model we describe below corresponds most closely to the Frank 884 .dog # .pig child.eyes mom.eyes mom.hands # ## wheres the piggie Figure 2: The photograph indicates non-linguistic context containing a (toy) pig and dog for the utterance Where’s the piggie?. Below that, we show the representation of this utterance that serves as the input to our models. The prefix (the portion of the string before the “##”) lists the available topics (i.e., the objects in the non-linguistic context) and their associated social cues (the cues for the pig are child.eyes, mom.eyes and mom.hands, while the dog is not associated with any social cues). The intended topic is the pig. The learner’s goals are to identify the utterance’s intended topic, and which words in the utterance are associated with which topic. Sentence Topic.pig T.None .dog NotTopical.child.eyes NotTopical.child.hands NotTopical.mom.eyes NotTopical.mom.hands NotTopical.mom.point # Topic.pig T.pig .pig Topical.child.eyes child.eyes Topical.child.hands Topical.mom.eyes Topical.mom.hands mom.hands Topical.mom.point # Topic.None ## Words.pig Word.None wheres Words.pig Word.None the Words.pig Word.pig piggie Figure 3: Sample parse generated by the Unigram PCFG. Nodes coloured red show how the “pig” topic is propagated from the prefix (before the “##” separator) into the utterance. The social cues associated with each object are generated either from a “Topical” or a “NotTopical” nonterminal, depending on whether the corresponding object is topical or not. 885 et al. model. Johnson et al. (2010) reduces grounded learning to grammatical inference for adaptor grammars and shows how it can be used to perform word segmentation as well as learning word-topic relationships, but their model does not take social cues into account. 
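The input strings of Figure 2 can be produced mechanically from the per-utterance annotations; the sketch below illustrates the encoding, with hypothetical field names rather than the corpus' actual file format.

```python
# Each available topic is written as ".object", followed by whatever social
# cues it carries, and closed off with "#"; "##" separates the prefix from
# the orthographic words of the utterance.

def encode_utterance(topics, cues, words):
    """topics: list of object names in the non-linguistic context;
    cues: dict mapping an object to its list of social cues;
    words: list of orthographic words uttered by the care-giver."""
    prefix_parts = []
    for obj in topics:
        prefix_parts.append("." + obj)
        prefix_parts.extend(cues.get(obj, []))
        prefix_parts.append("#")
    return " ".join(prefix_parts + ["##"] + list(words))

print(encode_utterance(
    ["dog", "pig"],
    {"pig": ["child.eyes", "mom.eyes", "mom.hands"]},
    ["wheres", "the", "piggie"]))
# -> ".dog # .pig child.eyes mom.eyes mom.hands # ## wheres the piggie"
```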
2 Reducing grounded learning with social cues to grammatical inference This section explains how we reduce ground learning problems with social cues to grammatical inference problems, which lets us apply a wide variety of grammatical inference algorithms to grounded learning problems. An advantage of reducing grounded learning to grammatical inference is that it suggests new ways to generalise grounded learning models; we explore three such generalisations here. The main challenge in this reduction is finding a way of expressing the non-linguistic information as part of the strings that serve as the grammatical inference procedure’s input. Here we encode the nonlinguistic information in a “prefix” to each utterance as shown in Figure 2, and devise a grammar such that inference for the grammar corresponds to learning the word-topic relationships and the salience of the social cues for grounded learning. All our models associate each utterance with zero or one topics (this means we cannot correctly analyse utterances with more than one intended topic). We analyse an utterance associated with zero topics as having the special topic None, so we can assume that every utterance has exactly one topic. All our grammars generate strings of the form shown in Figure 2, and they do so by parsing the prefix and the words of the utterance separately; the top-level rules of the grammar force the same topic to be associated with both the prefix and the words of the utterance (see Figure 3). 2.1 Topic models and the unigram PCFG As Johnson et al. (2010) observe, this kind of grounded learning can be viewed as a specialised kind of topic inference in a topic model, where the utterance topic is constrained by the available objects (possible topics). We exploit this observation here using a reduction based on the reduction of LDA topic models to PCFGs proposed by Johnson (2010). This leads to our first model, the unigram grammar, which is a PCFG.1 1In fact, the unigram grammar is equivalent to a HMM, but the PCFG parameterisation makes clear the relationship Sentence →Topict Wordst ∀t ∈T ′ TopicNone →## Topict →Tt TopicNone ∀t ∈T ′ Topict →TNone Topict ∀t ∈T Tt →t Topicalc1 ∀t ∈T Topicalci →(ci) Topicalci+1 i = 1, . . . , ℓ−1 Topicalcℓ→(cℓ) # TNone →t NotTopicalc1 ∀t ∈T NotTopicalci →(ci) NotTopicalci+1 i = 1, . . . , ℓ−1 NotTopicalcℓ→(cℓ) # Wordst →WordNone (Wordst) ∀t ∈T ′ Wordst →Wordt (Wordst) ∀t ∈T Wordt →w ∀t ∈T ′, w ∈W Figure 4: The rule schema that generate the unigram PCFG. Here (c1, . . . , cℓ) is an ordered list of the social cues, T is the set of all non-None available topics, T ′ = T ∪{None}, and W is the set of words appearing in the utterances. Parentheses indicate optionality. Figure 4 presents the rules of the unigram grammar. This grammar has two major parts. The rules expanding the Topict nonterminals ensure that the social cues for the available topic t are parsed under the Topical nonterminals. All other available topics are parsed under TNone nonterminals, so their social cues are parsed under NotTopical nonterminals. The rules expanding these non-terminals are specifically designed so that the generation of the social cues corresponds to a series of binary decisions about each social cue. For example, the probability of the rule Topicalchild.eyes →.child.eyes Topicalchild.hands is the probability of an object that is an utterance topic occuring with the child.eyes social cue. 
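Since Figure 4 is a rule schema instantiated for a particular topic set, cue list, and vocabulary, the grammar can be written out mechanically. The sketch below emits the rules as plain "LHS --> RHS" strings; this notation is illustrative, not the exact input syntax of any particular inference package, and each optional item in the schema is expanded into two rules.

```python
def unigram_rules(topics, cues, words):
    """topics: non-None available topics T; cues: ordered social cues
    (c1, ..., cl); words: the vocabulary W."""
    T = list(topics)
    T_prime = T + ["None"]
    rules = []
    for t in T_prime:
        rules.append("Sentence --> Topic.%s Words.%s" % (t, t))
    rules.append("Topic.None --> ##")
    for t in T_prime:
        rules.append("Topic.%s --> T.%s Topic.None" % (t, t))
    for t in T:
        rules.append("Topic.%s --> T.None Topic.%s" % (t, t))
    for t in T:
        rules.append("T.%s --> .%s Topical.%s" % (t, t, cues[0]))
        rules.append("T.None --> .%s NotTopical.%s" % (t, cues[0]))
    for kind in ("Topical", "NotTopical"):
        for i, c in enumerate(cues):
            nxt = "%s.%s" % (kind, cues[i + 1]) if i + 1 < len(cues) else "#"
            rules.append("%s.%s --> %s %s" % (kind, c, c, nxt))  # cue present
            rules.append("%s.%s --> %s" % (kind, c, nxt))        # cue absent
    for t in T_prime:
        rules.append("Words.%s --> Word.None Words.%s" % (t, t))
        rules.append("Words.%s --> Word.None" % t)
    for t in T:
        rules.append("Words.%s --> Word.%s Words.%s" % (t, t, t))
        rules.append("Words.%s --> Word.%s" % (t, t))
    for t in T_prime:
        for w in words:
            rules.append("Word.%s --> %s" % (t, w))
    return rules
```

For the corpus above, this would be instantiated with the five cues of Figure 1, e.g. unigram_rules(available_topics, ["child.eyes", "child.hands", "mom.eyes", "mom.hands", "mom.point"], vocabulary).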
By estimating the probabilities of these rules, the model effectively learns the probability of each social cue being associated with a Topical or a NotTopical available topic, respectively. The nonterminals Wordst expand to a sequence of Wordt and WordNone nonterminals, each of which can expand to any word whatsoever. In practice Wordt will expand to those words most strongly associated with topic t, while WordNone will expand to those words not associated with any topic. between grounded learning and estimation of grammar rule weights. 886 Sentence →Topict Collocst ∀t ∈T ′ Collocst →Colloct (Collocst) ∀t ∈T ′ Collocst →CollocNone (Collocst) ∀t ∈T Colloct →Wordst ∀t ∈T ′ Wordst →Wordt (Wordst) ∀t ∈T ′ Wordst →WordNone (Wordst) ∀t ∈T Wordt →Word ∀t ∈T ′ Word →w ∀w ∈W Figure 5: The rule schema that generate the collocation adaptor grammar. Adapted nonterminals are indicated via underlining. Here T is the set of all non-None available topics, T ′ = T ∪{None}, and W is the set of words appearing in the utterances. The rules expanding the Topict nonterminals are exactly as in unigram PCFG. 2.2 Adaptor grammars Our other grounded learning models are based on reductions of grounded learning to adaptor grammar inference problems. Adaptor grammars are a framework for stating a variety of Bayesian nonparametric models defined in terms of a hierarchy of Pitman-Yor Processes: see Johnson et al. (2007) for a formal description. Informally, an adaptor grammar is specified by a set of rules just as in a PCFG, plus a set of adapted nonterminals. The set of trees generated by an adaptor grammar is the same as the set of trees generated by a PCFG with the same rules, but the generative process differs. Nonadapted nonterminals in an adaptor grammar expand just as they do in a PCFG: the probability of choosing a rule is specified by its probability. However, the expansion of an adapted nonterminal depends on how it expanded in previous derivations. An adapted nonterminal can directly expand to a subtree with probability proportional to the number of times that subtree has been previously generated; it can also “back off” to expand using a grammar rule, just as in a PCFG, with probability proportional to a constant.2 Thus an adaptor grammar can be viewed as caching each tree generated by each adapted nonterminal, and regenerating it with probability proportional to the number of times it was previously generated (with some probability mass reserved to generate “new” trees). This enables adaptor gram2This is a description of Chinese Restaurant Processes, which are the predictive distributions for Dirichlet Processes. Our adaptor grammars are actually based on the more general Pitman-Yor Processes, as described in Johnson and Goldwater (2009). Sentence Topic.pig ... Collocs.pig Colloc.None Words.None Word.None Word wheres Collocs.pig Colloc.pig Words.pig Word.None Word the Words.pig Word.pig Word piggie Figure 6: Sample parse generated by the collocation adaptor grammar. The adapted nonterminals Colloct and Wordt are shown underlined; the subtrees they dominate are “cached” by the adaptor grammar. The prefix (not shown here) is parsed exactly as in the Unigram PCFG. mars to generalise over subtrees of arbitrary size. Generic software is available for adaptor grammar inference, based either on Variational Bayes (Cohen et al., 2010) or Markov Chain Monte Carlo (Johnson and Goldwater, 2009). 
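A minimal sketch of this caching behaviour in its Chinese-restaurant form, with Pitman-Yor discount a and concentration b, is given below. It illustrates only the generative step for a single adapted nonterminal and is not the inference software discussed next; cached subtrees are assumed to be represented by hashable objects (e.g., bracketed strings).

```python
import random

def pyp_expand(cache, a, b, generate_from_base):
    """cache: dict mapping each previously generated subtree to its count.
    With probability proportional to (b + a*k) the nonterminal backs off to
    the base PCFG distribution; otherwise it reuses a cached subtree with
    probability proportional to (count - a)."""
    n = sum(cache.values())   # total previous expansions of this nonterminal
    k = len(cache)            # number of distinct cached subtrees
    if n == 0 or random.random() < (b + a * k) / (n + b):
        tree = generate_from_base()   # "new" expansion via the grammar rules
    else:
        r = random.uniform(0.0, n - a * k)
        for tree, count in cache.items():
            r -= count - a
            if r <= 0.0:
                break
    cache[tree] = cache.get(tree, 0) + 1
    return tree
```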
We used the latter software because it is capable of performing hyper-parameter inference for the PCFG rule probabilities and the Pitman-Yor Process parameters. We used the “outof-the-box” settings for this software, i.e., uniform priors on all PCFG rule parameters, a Beta(2, 1) prior on the Pitman-Yor a parameters and a “vague” Gamma(100, 0.01) prior on the Pitman-Yor b parameters. (Presumably performance could be improved if the priors were tuned, but we did not explore this here). Here we explore a simple “collocation” extension to the unigram PCFG which associates multiword collocations, rather than individual words, with topics. Hardisty et al. (2010) showed that this significantly improved performance in a sentiment analysis task. The collocation adaptor grammar in Figure 5 generates the words of the utterance as a sequence of collocations, each of which is a sequence of words. Each collocation is either associated with the sentence topic or with the None topic, just like words in the unigram model. Figure 6 shows a sample parse generated by the collocation adaptor grammar. We also experimented with a variant of the unigram and collocation grammars in which the topicspecific word distributions Wordt for each t ∈T 887 Model Social Utterance topic Word topic Lexicon cues acc. f-score prec. rec. f-score prec. rec. f-score prec. rec. unigram none 0.3395 0.4044 0.3249 0.5353 0.2007 0.1207 0.5956 0.1037 0.05682 0.5952 unigram all 0.4907 0.6064 0.4867 0.8043 0.295 0.1763 0.9031 0.1483 0.08096 0.881 colloc none 0.4331 0.3513 0.3272 0.3792 0.2431 0.1603 0.5028 0.08808 0.04942 0.4048 colloc all 0.5837 0.598 0.5623 0.6384 0.4098 0.2702 0.8475 0.1671 0.09422 0.7381 unigram′ none 0.3261 0.3767 0.3054 0.4914 0.1893 0.1131 0.5811 0.1167 0.06583 0.5122 unigram′ all 0.5117 0.6106 0.4986 0.7875 0.2846 0.1693 0.891 0.1684 0.09402 0.8049 colloc′ none 0.5238 0.3419 0.3844 0.3078 0.2551 0.1732 0.4843 0.2162 0.1495 0.3902 colloc′ all 0.6492 0.6034 0.6664 0.5514 0.3981 0.2613 0.8354 0.3375 0.2269 0.6585 Figure 7: Utterance topic, word topic and lexicon results for all models, on data with and without social cues. The results for the variant models, in which Wordt nonterminals expand via WordNone, are shown under unigram′ and colloc′. Utterance topic shows how well the model discovered the intended topics at the utterance level, word topic shows how well the model associates word tokens with topics, and lexicon shows how well the topic most frequently associated with a word type matches an external word-topic dictionary. In this figure and below, “colloc” abbreviates “collocation”, “acc.” abbreviates “accuracy”, “prec.” abbreviates “precision” and “rec.” abbreviates “recall”. (the set of non-None available topics) expand via WordNone non-terminals. That is, in the variant grammars topical words are generated with the following rule schema: Wordt →WordNone ∀t ∈T WordNone →Word Word →w ∀w ∈W In these variant grammars, the WordNone nonterminal generates all the words of the language, so it defines a generic “background” distribution over all the words, rather than just the nontopical words. An effect of this is that the variant grammars tend to identify fewer words as topical. 
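The difference between the original and the primed (variant) grammars is confined to the word-level rules, and is easy to see when the two schemata are written out for a toy setting (Python sketch; the topic and word inventories and the rule notation are illustrative):

TOPICS = ["pig", "dog"]                                # T, the non-None topics
WORDS = ["wheres", "the", "piggie", "doggie", "woof"]  # W

def original_word_rules(topics, words):
    # Figure 4: every Word_t (including Word_None) can rewrite to any word,
    # so each topic has its own full distribution over the vocabulary.
    return ["Word_" + t + " --> " + w for t in topics + ["None"] for w in words]

def variant_word_rules(topics, words):
    # Primed variants: topical words are funnelled through Word_None, which
    # defines a single shared "background" distribution over all words.
    rules = ["Word_" + t + " --> Word_None" for t in topics]
    rules.append("Word_None --> Word")
    rules += ["Word --> " + w for w in words]
    return rules

print("\n".join(variant_word_rules(TOPICS, WORDS)))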
3 Experimental evaluation We performed grammatical inference using the adaptor grammar software described in Johnson and Goldwater (2009).3 All experiments involved 4 runs of 5,000 samples each, of which the first 2,500 were discarded for “burn-in”.4 From these samples we extracted the modal (i.e., most frequent) analysis, 3Because adaptor grammars are a generalisation of PCFGs, we could use the adaptor grammar software to estimate the unigram model. 4We made no effort to optimise the computation, but it seems the samplers actually stabilised after around a hundred iterations, so it was probably not necessary to sample so extensively. We estimated the error in our results by running our most complex model (the colloc′ model with all social cues) 20 times (i.e., 20×8 chains for 5,000 iterations) so we could compute the variance of each of the evaluation scores (it is reasonable to assume that the simpler models will have smaller variance). The standard deviation of all utterance topic and word topic measures is between 0.005 and 0.01; the standard deviation for lexicon f-score is 0.02, lexicon precision is 0.01 and lexicon recall is 0.03. The adaptor grammar software uses a sentence-wise which we evaluated as described below. The results of evaluating each model on the corpus with social cues, and on another corpus identical except that the social cues have been removed, are presented in Figure 7. Each model was evaluated on each corpus as follows. First, we extracted the utterance’s topic from the modal parse (this can be read off the Topict nodes), and compared this to the intended topics annotated in the corpus. The frequency with which the models’ predicted topics exactly matches the intended topics is given under “utterance topic accuracy”; the f-score, precision and recall of each model’s topic predictions are also given in the table. Because our models all associate word tokens with topics, we can also evaluate the accuracy with which word tokens are associated with topics. We constructed a small dictionary which identifies the words that can be used as the head of a phrase to refer to the topical objects (e.g., the dictionary indicates that dog, doggie and puppy name the topical object DOG). Our dictionary is relatively conservative; between one and eight words are associated with each topic. We scored the topic label on each word token in our corpus as follows. A topic label is scored as correct if it is given in our dictionary and the topic is one of the intended topics for the utterance. The “word topic” entries in Figure 7 give the results of this evaluation. blocked sampler, so it requires fewer iterations than a pointwise sampler. We used 5,000 iterations because this is the software’s default setting; evaluating the trace output suggests it only takes several hundred iterations to “burn in”. However, we ran 8 chains for 25,000 iterations of the colloc′ model; as expected the results of this run are within two standard deviations of the results reported above. 888 Model Social Utterance topic Word topic Lexicon cues acc. f-score prec. rec. f-score prec. rec. f-score prec. rec. 
unigram none 0.3395 0.4044 0.3249 0.5353 0.2007 0.1207 0.5956 0.1037 0.05682 0.5952 unigram +child.eyes 0.4573 0.5725 0.4559 0.7694 0.2891 0.1724 0.8951 0.1362 0.07415 0.8333 unigram +child.hands 0.3399 0.4011 0.3246 0.5247 0.2008 0.121 0.5892 0.09705 0.05324 0.5476 unigram +mom.eyes 0.338 0.4023 0.3234 0.5322 0.1992 0.1198 0.5908 0.09664 0.053 0.5476 unigram +mom.hands 0.3563 0.4279 0.3437 0.5667 0.1984 0.1191 0.5948 0.09959 0.05455 0.5714 unigram +mom.point 0.3063 0.3548 0.285 0.4698 0.1806 0.1086 0.5359 0.09224 0.05057 0.5238 colloc none 0.4331 0.3513 0.3272 0.3792 0.2431 0.1603 0.5028 0.08808 0.04942 0.4048 colloc +child.eyes 0.5159 0.5006 0.4652 0.542 0.351 0.2309 0.7312 0.1432 0.07989 0.6905 colloc +child.hands 0.4827 0.4275 0.3999 0.4592 0.2897 0.1913 0.5964 0.1192 0.06686 0.5476 colloc +mom.eyes 0.4697 0.4171 0.3869 0.4525 0.2708 0.1781 0.5642 0.1013 0.05666 0.4762 colloc +mom.hands 0.4747 0.4251 0.3942 0.4612 0.274 0.1806 0.5666 0.09548 0.05337 0.4524 colloc +mom.point 0.4228 0.3378 0.3151 0.3639 0.2575 0.1716 0.5157 0.09278 0.05202 0.4286 Figure 8: Effect of using just one social cue on the experimental results for the unigram and collocation models. The “importance” of a social cue can be quantified by the degree to which the model’s evaluation score improves when using a corpus containing that social cue relative to its evaluation score when using a corpus without any social cues. The most important social cue is the one which causes performance to improve the most. Finally, we extracted a lexicon from the parsed corpus produced by each model. We counted how often each word type was associated with each topic in our sampler’s output (including the None topic), and assigned the word to its most frequent topic. The “lexicon” entries in Figure 7 show how well the entries in these lexicons match the entries in the manually-constructed dictionary discussed above. There are 10 different evaluation scores, and no model dominates in all of them. However, the topscoring result in every evaluation is always for a model trained using social cues, demonstrating the importance of these social cues. The variant collocation model (trained on data with social cues) was the top-scoring model on four evaluation scores, which is more than any other model. One striking thing about this evaluation is that the recall scores are all much higher than the precision scores, for each evaluation. This indicates that all of the models, especially the unigram model, are labelling too many words as topical. This is perhaps not too surprising: because our models completely lack any notion of syntactic structure and simply model the association between words and topics, they label many non-nouns with topics (e.g., woof is typically labelled with the topic DOG). 3.1 Evaluating the importance of social cues It is scientifically interesting to be able to evaluate the importance of each of the social cues to grounded learning. One way to do this is to study the effect of adding or removing social cues from the corpus on the ability of our models to perform grounded learning. An important social cue should have a large impact on our models’ performance; an unimportant cue should have little or no impact. 
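This "add one cue" comparison reduces to a simple difference score. The sketch below (Python) computes it for the unigram model's utterance-topic accuracy, using values taken from Figures 7 and 8; the same computation applies to any of the other evaluation measures.

# Utterance-topic accuracy of the unigram model (Figures 7 and 8).
acc_no_cues = 0.3395
acc_one_cue = {
    "child.eyes": 0.4573,
    "child.hands": 0.3399,
    "mom.eyes": 0.3380,
    "mom.hands": 0.3563,
    "mom.point": 0.3063,
}

def cue_importance(baseline, with_cue):
    """Importance of a cue = improvement over the cue-less baseline."""
    return {cue: score - baseline for cue, score in with_cue.items()}

for cue, delta in sorted(cue_importance(acc_no_cues, acc_one_cue).items(),
                         key=lambda item: -item[1]):
    print("%-12s %+.4f" % (cue, delta))
# child.eyes yields by far the largest gain (+0.1178); mom.point actually
# lowers the unigram model's accuracy slightly.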
Figure 8 compares the performance of the unigram and collocation models on corpora containing a single social cue to their performance on the corpus without any social cues, while Figure 9 compares the performance of these models on corpora containing all but one social cue to the corpus containing all of the social cues. In both of these evaluations, with respect to all 10 evaluation measures, the child.eyes social cue had the most impact on model performance. Why would the child’s own gaze be more important than the caregiver’s? Perhaps caregivers are following in, i.e., talking about objects that their children are interested in (Baldwin, 1991). However, another possible explanation is that this result is due to the general continuity of conversational topics over time. Frank et al. (to appear) show that for the current corpus, the topic of the preceding utterance is very likely to be the topic of the current one also. Thus, the child’s eyes might be a good predictor because they reflect the fact that the child’s attention has been drawn to an object by previous utterances. Notice that these two possible explanations of the importance of the child.eyes cue are diametrically opposed; the first explanation claims that the cue is important because the child is driving the discourse, while the second explanation claims that the cue is important because the child’s gaze follows the topic of the caregiver’s previous utterance. This sort of question about causal relationships in conversations may be very difficult to answer using standard descriptive techniques, but it may be an interesting av889 Model Social Utterance topic Word topic Lexicon cues acc. f-score prec. rec. f-score prec. rec. f-score prec. rec. unigram all 0.4907 0.6064 0.4867 0.8043 0.295 0.1763 0.9031 0.1483 0.08096 0.881 unigram −child.eyes 0.3836 0.4659 0.3738 0.6184 0.2149 0.1286 0.6546 0.1111 0.06089 0.6341 unigram −child.hands 0.4907 0.6063 0.4863 0.8051 0.296 0.1769 0.9056 0.1525 0.08353 0.878 unigram −mom.eyes 0.4799 0.5974 0.4768 0.7996 0.2898 0.1727 0.9007 0.1551 0.08486 0.9024 unigram −mom.hands 0.4871 0.5996 0.4815 0.7945 0.2925 0.1746 0.8991 0.1561 0.08545 0.9024 unigram −mom.point 0.4875 0.6033 0.4841 0.8004 0.2934 0.1752 0.9007 0.1558 0.08525 0.9024 colloc all 0.5837 0.598 0.5623 0.6384 0.4098 0.2702 0.8475 0.1671 0.09422 0.738 colloc −child.eyes 0.5604 0.5746 0.529 0.6286 0.39 0.2561 0.8176 0.1534 0.08642 0.6829 colloc −child.hands 0.5849 0.6 0.5609 0.6451 0.4145 0.273 0.8612 0.1662 0.09375 0.7317 colloc −mom.eyes 0.5709 0.5829 0.5457 0.6255 0.4036 0.2655 0.8418 0.1662 0.09375 0.7317 colloc −mom.hands 0.5795 0.5935 0.5571 0.6349 0.4038 0.2653 0.8442 0.1788 0.1009 0.7805 colloc −mom.point 0.5851 0.6006 0.5607 0.6467 0.4097 0.2685 0.8644 0.1742 0.09841 0.7561 Figure 9: Effect of using all but one social cue on the experimental results for the unigram and collocation models. The “importance” of a social cue can be quantified by the degree to which the model’s evaluation score degrades when that just social cue is removed from the corpus, relative to its evaluation score when using a corpus without all social cues. The most important social cue is the one which causes performance to degrade the most. enue for future investigation using more structured models such as those proposed here.5 4 Conclusion and future work This paper presented four different grounded learning models that exploit social cues. 
These models are all expressed via reductions to grammatical inference problems, so standard “off the shelf” grammatical inference tools can be used to learn them. Here we used the same adaptor grammar software tools to learn all these models, so we can be relatively certain that any differences we observe are due to differences in the models, rather than quirks in the software. Because the adaptor grammar software performs full Bayesian inference, including for model parameters, an unusual feature of our models is that we did not need to perform any parameter tuning whatsoever. This feature is particularly interesting with respect to the parameters on social cues. Psychological proposals have suggested that children may discover that particular social cues help in establishing reference (Baldwin, 1993; Hollich et al., 2000), but prior modeling work has often assumed that cues, cue weights, or both are prespecified. In contrast, the models described here could in principle discover a wide range of different social conventions. 5A reviewer suggested that we can test whether child.eyes effectively provides the same information as the previous topic by adding the previous topic as a (pseudo-) social cue. We tried this, and child.eyes and previous.topic do in fact seem to convey very similar information: e.g., the model with previous.topic and without child.eyes scores essentially the same as the model with all social cues. Our work instantiates the strategy of investigating the structure of children’s learning environment using “ideal learner” models. We used our models to investigate scientific questions about the role of social cues in grounded language learning. Because the performance of all four models studied in this paper improve dramatically when provided with social cues in all ten evaluation metrics, this paper provides strong support for the view that social cues are a crucial information source for grounded language learning. We also showed that the importance of the different social cues in grounded language learning can be evaluated using “add one cue” and “subtract one cue” methodologies. According to both of these, the child.eyes cue is the most important of the five social cues studied here. There are at least two possible reasons for this: the caregiver’s topic could be determined by the child’s gaze, or the child.eyes cue could be providing our models with information about the topic of the previous utterance. Incorporating topic continuity and anaphoric dependencies into our models would be likely to improve performance. This improvement might also help us distinguish the two hypotheses about the child.eyes cue. If the child.eyes cue is just providing indirect information about topic continuity, then the importance of the child.eyes cue should decrease when we incorporate topic continuity into our models. But if the child’s gaze is in fact determining the care-giver’s topic, then child.eyes should remain a strong cue even when anaphoric dependencies and topic continuity are incorporated into our models. 890 Acknowledgements This research was supported under the Australian Research Council’s Discovery Projects funding scheme (project number DP110102506). References Dare A. Baldwin. 1991. Infants’ contribution to the achievement of joint reference. Child Development, 62(5):874–890. Dare A. Baldwin. 1993. Infants’ ability to consult the speaker for clues to word reference. Journal of Child Language, 20:395–395. Benjamin B¨orschinger, Bevan K. Jones, and Mark Johnson. 2011. 
Reducing grounded learning tasks to grammatical inference. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1416–1425, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. M. Carpenter, K. Nagell, M. Tomasello, G. Butterworth, and C. Moore. 1998. Social cognition, joint attention, and communicative competence from 9 to 15 months of age. Monographs of the society for research in child development. E.V. Clark. 1987. The principle of contrast: A constraint on language acquisition. Mechanisms of language acquisition, 1:33. Shay B. Cohen, David M. Blei, and Noah A. Smith. 2010. Variational inference for adaptor grammars. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 564– 572, Los Angeles, California, June. Association for Computational Linguistics. Michael Frank, Noah Goodman, and Joshua Tenenbaum. 2008. A Bayesian framework for cross-situational word-learning. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 457–464, Cambridge, MA. MIT Press. Michael C. Frank, Joshua Tenenbaum, and Anne Fernald. to appear. Social and discourse contributions to the determination of reference in cross-situational word learning. Language, Learning, and Development. Eric A. Hardisty, Jordan Boyd-Graber, and Philip Resnik. 2010. Modeling perspective using adaptor grammars. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 284– 292, Stroudsburg, PA, USA. Association for Computational Linguistics. G.J. Hollich, K. Hirsh-Pasek, and R. Golinkoff. 2000. Breaking the language barrier: An emergentist coalition model for the origins of word learning. Monographs of the Society for Research in Child Development. Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317–325, Boulder, Colorado, June. Association for Computational Linguistics. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641–648. MIT Press, Cambridge, MA. Mark Johnson, Katherine Demuth, Michael Frank, and Bevan Jones. 2010. Synergies in learning words and their referents. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1018–1026. Mark Johnson. 2008. Using adaptor grammars to identifying synergies in the unsupervised acquisition of linguistic structure. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, pages 398–406, Columbus, Ohio. Association for Computational Linguistics. Mark Johnson. 2010. PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1148–1157, Uppsala, Sweden, July. Association for Computational Linguistics. Patricia K. Kuhl, Feng-Ming Tsao, and Huei-Mei Liu. 2003. 
Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences USA, 100(15):9096–9101. Jeffrey Siskind. 1996. A computational study of crosssituational techniques for learning word-to-meaning mappings. Cognition, 61(1-2):39–91. L.B. Smith, S.S. Jones, B. Landau, L. Gershkoff-Stowe, and L. Samuelson. 2002. Object name learning provides on-the-job training for attention. Psychological Science, 13(1):13. Chen Yu and Dana H Ballard. 2007. A unified model of early word learning: Integrating statistical and social cues. Neurocomputing, 70(13-15):2149–2165. 891
2012
93
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 892–901, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics You Had Me at Hello: How Phrasing Affects Memorability Cristian Danescu-Niculescu-Mizil Justin Cheng Jon Kleinberg Lillian Lee Department of Computer Science Cornell University [email protected], [email protected], [email protected], [email protected] Abstract Understanding the ways in which information achieves widespread public awareness is a research question of significant interest. We consider whether, and how, the way in which the information is phrased — the choice of words and sentence structure — can affect this process. To this end, we develop an analysis framework and build a corpus of movie quotes, annotated with memorability information, in which we are able to control for both the speaker and the setting of the quotes. We find that there are significant differences between memorable and non-memorable quotes in several key dimensions, even after controlling for situational and contextual factors. One is lexical distinctiveness: in aggregate, memorable quotes use less common word choices, but at the same time are built upon a scaffolding of common syntactic patterns. Another is that memorable quotes tend to be more general in ways that make them easy to apply in new contexts — that is, more portable. We also show how the concept of “memorable language” can be extended across domains. 1 Hello. My name is Inigo Montoya. Understanding what items will be retained in the public consciousness, and why, is a question of fundamental interest in many domains, including marketing, politics, entertainment, and social media; as we all know, many items barely register, whereas others catch on and take hold in many people’s minds. An active line of recent computational work has employed a variety of perspectives on this question. Building on a foundation in the sociology of diffusion [27, 31], researchers have explored the ways in which network structure affects the way information spreads, with domains of interest including blogs [1, 11], email [37], on-line commerce [22], and social media [2, 28, 33, 38]. There has also been recent research addressing temporal aspects of how different media sources convey information [23, 30, 39] and ways in which people react differently to information on different topics [28, 36]. Beyond all these factors, however, one’s everyday experience with these domains suggests that the way in which a piece of information is expressed — the choice of words, the way it is phrased — might also have a fundamental effect on the extent to which it takes hold in people’s minds. Concepts that attain wide reach are often carried in messages such as political slogans, marketing phrases, or aphorisms whose language seems intuitively to be memorable, “catchy,” or otherwise compelling. Our first challenge in exploring this hypothesis is to develop a notion of “successful” language that is precise enough to allow for quantitative evaluation. We also face the challenge of devising an evaluation setting that separates the phrasing of a message from the conditions in which it was delivered — highlycited quotes tend to have been delivered under compelling circumstances or fit an existing cultural, political, or social narrative, and potentially what appeals to us about the quote is really just its invocation of these extra-linguistic contexts. 
Is the form of the language adding an effect beyond or independent of these (obviously very crucial) factors? To investigate the question, one needs a way of control892 ling — as much as possible — for the role that the surrounding context of the language plays. The present work (i): Evaluating language-based memorability Defining what makes an utterance memorable is subtle, and scholars in several domains have written about this question. There is a rough consensus that an appropriate definition involves elements of both recognition — people should be able to retain the quote and recognize it when they hear it invoked — and production — people should be motivated to refer to it in relevant situations [15]. One suggested reason for why some memes succeed is their ability to provoke emotions [16]. Alternatively, memorable quotes can be good for expressing the feelings, mood, or situation of an individual, a group, or a culture (the zeitgeist): “Certain quotes exquisitely capture the mood or feeling we wish to communicate to someone. We hear them ... and store them away for future use” [10]. None of these observations, however, serve as definitions, and indeed, we believe it desirable to not pre-commit to an abstract definition, but rather to adopt an operational formulation based on external human judgments. In designing our study, we focus on a domain in which (i) there is rich use of language, some of which has achieved deep cultural penetration; (ii) there already exist a large number of external human judgments — perhaps implicit, but in a form we can extract; and (iii) we can control for the setting in which the text was used. Specifically, we use the complete scripts of roughly 1000 movies, representing diverse genres, eras, and levels of popularity, and consider which lines are the most “memorable”. To acquire memorability labels, for each sentence in each script, we determine whether it has been listed as a “memorable quote” by users of the widely-known IMDb (the Internet Movie Database), and also estimate the number of times it appears on the Web. Both of these serve as memorability metrics for our purposes. When we evaluate properties of memorable quotes, we compare them with quotes that are not assessed as memorable, but were spoken by the same character, at approximately the same point in the same movie. This enables us to control in a fairly fine-grained way for the confounding effects of context discussed above: we can observe differences that persist even after taking into account both the speaker and the setting. In a pilot validation study, we find that human subjects are effective at recognizing the more IMDbmemorable of two quotes, even for movies they have not seen. This motivates a search for features intrinsic to the text of quotes that signal memorability. In fact, comments provided by the human subjects as part of the task suggested two basic forms that such textual signals could take: subjects felt that (i) memorable quotes often involve a distinctive turn of phrase; and (ii) memorable quotes tend to invoke general themes that aren’t tied to the specific setting they came from, and hence can be more easily invoked for future (out of context) uses. We test both of these principles in our analysis of the data. 
The present work (ii): What distinguishes memorable quotes Under the controlled-comparison setting sketched above, we find that memorable quotes exhibit significant differences from nonmemorable quotes in several fundamental respects, and these differences in the data reinforce the two main principles from the human pilot study. First, we show a concrete sense in which memorable quotes are indeed distinctive: with respect to lexical language models trained on the newswire portions of the Brown corpus [21], memorable quotes have significantly lower likelihood than their nonmemorable counterparts. Interestingly, this distinctiveness takes place at the level of words, but not at the level of other syntactic features: the part-ofspeech composition of memorable quotes is in fact more likely with respect to newswire. Thus, we can think of memorable quotes as consisting, in an aggregate sense, of unusual word choices built on a scaffolding of common part-of-speech patterns. We also identify a number of ways in which memorable quotes convey greater generality. In their patterns of verb tenses, personal pronouns, and determiners, memorable quotes are structured so as to be more “free-standing,” containing fewer markers that indicate references to nearby text. Memorable quotes differ in other interesting aspects as well, such as sound distributions. Our analysis of memorable movie quotes suggests a framework by which the memorability of text in a range of different domains could be investigated. 893 We provide evidence that such cross-domain properties may hold, guided by one of our motivating applications in marketing. In particular, we analyze a corpus of advertising slogans, and we show that these slogans have significantly greater likelihood at both the word level and the part-of-speech level with respect to a language model trained on memorable movie quotes, compared to a corresponding language model trained on non-memorable movie quotes. This suggests that some of the principles underlying memorable text have the potential to apply across different areas. Roadmap §2 lays the empirical foundations of our work: the design and creation of our movie-quotes dataset, which we make publicly available (§2.1), a pilot study with human subjects validating IMDbbased memorability labels (§2.2), and further study of incorporating search-engine counts (§2.3). §3 details our analysis and prediction experiments, using both movie-quotes data and, as an exploration of cross-domain applicability, slogans data. §4 surveys related work across a variety of fields. §5 briefly summarizes and indicates some future directions. 2 I’m ready for my close-up. 2.1 Data To study the properties of memorable movie quotes, we need a source of movie lines and a designation of memorability. Following [8], we constructed a corpus consisting of all lines from roughly 1000 movies, varying in genre, era, and popularity; for each movie, we then extracted the list of quotes from IMDb’s Memorable Quotes page corresponding to the movie.1 A memorable quote in IMDb can appear either as an individual sentence spoken by one character, or as a multi-sentence line, or as a block of dialogue involving multiple characters. 
In the latter two cases, it can be hard to determine which particular portion is viewed as memorable (some involve a build-up to a punch line; others involve the follow-through after a well-phrased opening sentence), and so we focus in our comparisons on those memorable quotes that 1This extraction involved some edit-distance-based alignment, since the exact form of the line in the script can exhibit minor differences from the version typed into IMDb. 1 2 3 4 5 6 7 8 9 10 Decile 0 100 200 300 400 500 600 700 800 Number of memorable quotes Figure 1: Location of memorable quotes in each decile of movie scripts (the first 10th, the second 10th, etc.), summed over all movies. The same qualitative results hold if we discard each movie’s very first and last line, which might have privileged status. appear as a single sentence rather than a multi-line block.2 We now formulate a task that we can use to evaluate the features of memorable quotes. Recall that our goal is to identify effects based in the language of the quotes themselves, beyond any factors arising from the speaker or context. Thus, for each (singlesentence) memorable quote M, we identify a nonmemorable quote that is as similar as possible to M in all characteristics but the choice of words. This means we want it to be spoken by the same character in the same movie. It also means that we want it to have the same length: controlling for length is important because we expect that on average, shorter quotes will be easier to remember than long quotes, and that wouldn’t be an interesting textual effect to report. Moreover, we also want to control for the fact that a quote’s position in a movie can affect memorability: certain scenes produce more memorable dialogue, and as Figure 1 demonstrates, in aggregate memorable quotes also occur disproportionately near the beginnings and especially the ends of movies. In summary, then, for each M, we pick a contrasting (single-sentence) quote N from the same movie that is as close in the script as possible to M (either before or after it), subject to the conditions that (i) M and N are uttered by the same speaker, (ii) M and N have the same number of words, and (iii) N does not occur in the IMDb list of memorable 2We also ran experiments relaxing the single-sentence assumption, which allows for stricter scene control and a larger dataset but complicates comparisons involving syntax. The non-syntax results were in line with those reported here. 894 Movie First Quote Second Quote Jackie Brown Half a million dollars will always be missed. I know the type, trust me on this. Star Trek: Nemesis I think it’s time to try some unsafe velocities. No cold feet, or any other parts of our anatomy. Ordinary People A little advice about feelings kiddo; don’t expect it always to tickle. I mean there’s someone besides your mother you’ve got to forgive. Table 1: Three example pairs of movie quotes. Each pair satisfies our criteria: the two component quotes are spoken close together in the movie by the same character, have the same length, and one is labeled memorable by the IMDb while the other is not. (Contractions such as “it’s” count as two words.) quotes for the movie (either as a single line or as part of a larger block). Given such pairs, we formulate a pairwise comparison task: given M and N, determine which is the memorable quote. 
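Constructing these pairs is mechanical once each script is available as an ordered list of lines with speaker attributions and IMDb memorability labels. The sketch below (Python) applies criteria (i)-(iii); the dictionary field names are illustrative, and the alignment of script lines against IMDb's typed-in versions (footnote 1) is assumed to have been done already.

def word_count(text):
    return len(text.split())

def find_contrast(script, m):
    """Pick the contrasting quote N for a memorable quote m: same speaker,
    same number of words, not on the IMDb memorable-quotes list (alone or as
    part of a block), and as close as possible to m in the script."""
    candidates = [
        line for line in script
        if not line["memorable"]                               # criterion (iii)
        and line["speaker"] == m["speaker"]                    # criterion (i)
        and word_count(line["text"]) == word_count(m["text"])  # criterion (ii)
    ]
    if not candidates:
        return None     # some memorable quotes have no usable contrast
    return min(candidates, key=lambda line: abs(line["idx"] - m["idx"]))

# Usage: pairs = [(m, find_contrast(script, m)) for m in script if m["memorable"]]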
Psychological research on subjective evaluation [35], as well as initial experiments using ourselves as subjects, indicated that this pairwise set-up easier to work with than simply presenting a single sentence and asking whether it is memorable or not; the latter requires agreement on an “absolute” criterion for memorability that is very hard to impose consistently, whereas the former simply requires a judgment that one quote is more memorable than another. Our main dataset, available at http://www.cs. cornell.edu/∼cristian/memorability.html,3 thus consists of approximately 2200 such (M, N) pairs, separated by a median of 5 same-character lines in the script. The reader can get a sense for the nature of the data from the three examples in Table 1. We now discuss two further aspects to the formulation of the experiment: a preliminary pilot study involving human subjects, and the incorporation of search engine counts into the data. 2.2 Pilot study: Human performance As a preliminary consideration, we did a small pilot study to see if humans can distinguish memorable from non-memorable quotes, assuming our IMDBinduced labels as gold standard. Six subjects, all native speakers of English and none an author of this paper, were presented with 11 or 12 pairs of memorable vs. non-memorable quotes; again, we controlled for extra-textual effects by ensuring that in each pair the two quotes come from the same movie, are by the same character, have the same length, and 3Also available there: other examples and factoids. subject number of matches with IMDb-induced annotation A 11/11 = 100% B 11/12 = 92% C 9/11 = 82% D 8/11 = 73% E 7/11 = 64% F 7/12 = 58% macro avg — 78% Table 2: Human pilot study: number of matches to IMDb-induced annotation, ordered by decreasing match percentage. For the null hypothesis of random guessing, these results are statistically significant, p < 2−6 ≈.016. appear as nearly as possible in the same scene.4 The order of quotes within pairs was randomized. Importantly, because we wanted to understand whether the language of the quotes by itself contains signals about memorability, we chose quotes from movies that the subjects said they had not seen. (This means that each subject saw a different set of quotes.) Moreover, the subjects were requested not to consult any external sources of information.5 The reader is welcome to try a demo version of the task at http: //www.cs.cornell.edu/∼cristian/memorability.html. Table 2 shows that all the subjects performed (sometimes much) better than chance, and against the null hypothesis that all subjects are guessing randomly, the results are statistically significant, p < 2−6 ≈.016. These preliminary findings provide evidence for the validity of our task: despite the apparent difficulty of the job, even humans who haven’t seen the movie in question can recover our IMDb4In this pilot study, we allowed multi-sentence quotes. 5We did not use crowd-sourcing because we saw no way to ensure that this condition would be obeyed by arbitrary subjects. We do note, though, that after our research was completed and as of Apr. 26, 2012, ≈11,300 people completed the online test: average accuracy: 72%, mode number correct: 9/12. 895 induced labels with some reliability.6 2.3 Incorporating search engine counts Thus far we have discussed a dataset in which memorability is determined through an explicit labeling drawn from the IMDb. 
Given the “production” aspect of memorability discussed in §1, we should also expect that memorable quotes will tend to appear more extensively on Web pages than nonmemorable quotes; note that incorporating this insight makes it possible to use the (implicit) judgments of a much larger number of people than are represented by the IMDb database. It therefore makes sense to try using search-engine result counts as a second indication of memorability. We experimented with several ways of constructing memorability information from search-engine counts, but this proved challenging. Searching for a quote as a stand-alone phrase runs into the problem that a number of quotes are also sentences that people use without the movie in mind, and so high counts for such quotes do not testify to the phrase’s status as a memorable quote from the movie. On the other hand, searching for the quote in a Boolean conjunction with the movie’s title discards most of these uses, but also eliminates a large fraction of the appearances on the Web that we want to find: precisely because memorable quotes tend to have widespread cultural usage, people generally don’t feel the need to include the movie’s title when invoking them. Finally, since we are dealing with roughly 1000 movies, the result counts vary over an enormous range, from recent blockbusters to movies with relatively small fan bases. In the end, we found that it was more effective to use the result counts in conjunction with the IMDb labels, so that the counts played the role of an additional filter rather than a free-standing numerical value. Thus, for each pair (M, N) produced using the IMDb methodology above, we searched for each of M and N as quoted expressions in a Boolean conjunction with the title of the movie. We then kept only those pairs for which M (i) produced more than five results in our (quoted, conjoined) search, and (ii) produced at least twice as many results as the cor6The average accuracy being below 100% reinforces that context is very important, too. responding search for N. We created a version of this filtered dataset using each of Google and Bing, and all the main findings were consistent with the results on the IMDb-only dataset. Thus, in what follows, we will focus on the main IMDb-only dataset, discussing the relationship to the dataset filtered by search engine counts where relevant (in which case we will refer to the +Google dataset). 3 Never send a human to do a machine’s job. We now discuss experiments that investigate the hypotheses discussed in §1. In particular, we devise methods that can assess the distinctiveness and generality hypotheses and test whether there exists a notion of “memorable language” that operates across domains. In addition, we evaluate and compare the predictive power of these hypotheses. 3.1 Distinctiveness One of the hypotheses we examine is whether the use of language in memorable quotes is to some extent unusual. In order to quantify the level of distinctiveness of a quote, we take a language-model approach: we model “common language” using the newswire sections of the Brown corpus [21]7, and evaluate how distinctive a quote is by evaluating its likelihood with respect to this model — the lower the likelihood, the more distinctive. In order to assess different levels of lexical and syntactic distinctiveness, we employ a total of six Laplacesmoothed8 language models: 1-gram, 2-gram, and 3-gram word LMs and 1-gram, 2-gram and 3-gram part-of-speech9 LMs. 
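A minimal version of the lexical distinctiveness measure can be written directly from this description (Python sketch; sentence-boundary and out-of-vocabulary handling are simplified, and only the bigram case is shown):

import math
from collections import Counter

ALPHA = 0.2   # additive smoothing parameter, as in footnote 8

class LaplaceBigramLM:
    def __init__(self, sentences):
        """sentences: token lists from the "common language" corpus
        (here, the newswire sections of the Brown corpus)."""
        self.unigrams, self.bigrams = Counter(), Counter()
        for toks in sentences:
            toks = ["<s>"] + toks
            self.unigrams.update(toks)
            self.bigrams.update(zip(toks, toks[1:]))
        self.vocab = len(self.unigrams)

    def logprob(self, toks):
        toks = ["<s>"] + toks
        return sum(math.log((self.bigrams[(p, w)] + ALPHA) /
                            (self.unigrams[p] + ALPHA * self.vocab))
                   for p, w in zip(toks, toks[1:]))

def more_distinctive(lm, quote_a, quote_b):
    """The quote with the lower likelihood under the common-language model is
    the more distinctive one; paired quotes have the same number of words,
    so raw log-probabilities are directly comparable."""
    return quote_a if lm.logprob(quote_a) < lm.logprob(quote_b) else quote_b

The part-of-speech models are built in the same way, with tag sequences in place of word sequences.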
We find strong evidence that from a lexical perspective, memorable quotes are more distinctive than their non-memorable counterparts. As indicated in Table 3, for each of our lexical “common language” models, in about 60% of the quote pairs, the memorable quote is more distinctive. Interestingly, the reverse is true when it comes to 7Results were qualitatively similar if we used the fiction portions. The age of the Brown corpus makes it less likely to contain modern movie quotes. 8We employ Laplace (additive) smoothing with a smoothing parameter of 0.2. The language models’ vocabulary was that of the entire training corpus. 9Throughout we obtain part-of-speech tags by using the NLTK maximum entropy tagger with default parameters. 896 “common language” model IMDb-only +Google lexical 1-gram 61.13%∗∗∗ 59.21%∗∗∗ 2-gram 59.22%∗∗∗ 57.03%∗∗∗ 3-gram 59.81%∗∗∗ 58.32%∗∗∗ syntactic 1-gram 43.60%∗∗∗ 44.77%∗∗∗ 2-gram 48.31% 47.84% 3-gram 50.91% 50.92% Table 3: Distinctiveness: percentage of quote pairs in which the the memorable quote is more distinctive than the non-memorable one according to the respective “common language” model. Significance according to a two-tailed sign test is indicated using *-notation (∗∗∗=“p<.001”). syntax: memorable quotes appear to follow the syntactic patterns of “common language” as closely as or more closely than non-memorable quotes. Together, these results suggest that memorable quotes consist of unusual word sequences built on common syntactic scaffolding. 3.2 Generality Another of our hypotheses is that memorable quotes are easier to use outside the specific context in which they were uttered — that is, more “portable” — and therefore exhibit fewer terms that refer to those settings. We use the following syntactic properties as proxies for the generality of a quote: • Fewer 3rd-person pronouns, since these commonly refer to a person or object that was introduced earlier in the discourse. Utterances that employ fewer such pronouns are easier to adapt to new contexts, and so will be considered more general. • More indefinite articles like a and an, since they are more likely to refer to general concepts than definite articles. Quotes with more indefinite articles will be considered more general. • Fewer past tense verbs and more present tense verbs, since the former are more likely to refer to specific previous events. Therefore utterances that employ fewer past tense verbs (and more present tense verbs) will be considered more general. Table 4 gives the results for each of these four metrics — in each case, we show the percentage of Generality metric IMDb-only +Google fewer 3rd pers. pronouns 64.37%∗∗∗ 62.93%∗∗∗ more indef. article 57.21%∗∗∗ 58.23%∗∗∗ less past tense 57.91%∗∗∗ 59.74%∗∗∗ more present tense 54.60%∗∗∗ 55.86%∗∗∗ Table 4: Generality: percentage of quote pairs in which the memorable quote is more general than the nonmemorable ones according to the respective metric. Pairs where the metric does not distinguish between the quotes are not considered. quote pairs for which the memorable quote scores better on the generality metric. Note that because the issue of generality is a complex one for which there is no straightforward single metric, our approach here is based on several proxies for generality, considered independently; yet, as the results show, all of these point in a consistent direction. 
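Each of these generality proxies reduces to counting tokens of particular classes. A rough implementation is sketched below (Python, assuming an NLTK tokenizer and a Penn-Treebank-style POS tagger are installed; the pronoun list and the mapping of tags to tenses are simplifications of the actual feature definitions):

import nltk   # assumes the tokenizer and POS-tagger models have been downloaded

THIRD_PERSON = {"he", "him", "his", "she", "her", "hers",
                "it", "its", "they", "them", "their", "theirs"}

def generality_counts(quote):
    """Token counts used as proxies for how portable a quote is."""
    tokens = nltk.word_tokenize(quote)
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    words = [t.lower() for t in tokens]
    return {
        "third_person_pronouns": sum(w in THIRD_PERSON for w in words),
        "indefinite_articles":   sum(w in {"a", "an"} for w in words),
        "past_tense_verbs":      sum(t == "VBD" for t in tags),
        "present_tense_verbs":   sum(t in {"VBP", "VBZ"} for t in tags),
    }

# A quote counts as more general than its partner if it has fewer third-person
# pronouns, more indefinite articles, fewer past-tense and more present-tense verbs.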
It is an interesting open question to develop richer ways of assessing whether a quote has greater generality, in the sense that people intuitively attribute to memorable quotes. 3.3 “Memorable” language beyond movies One of the motivating questions in our analysis is whether there are general principles underlying “memorable language.” The results thus far suggest potential families of such principles. A further question in this direction is whether the notion of memorability can be extended across different domains, and for this we collected (and distribute on our website) 431 phrases that were explicitly designed to be memorable: advertising slogans (e.g., “Quality never goes out of style.”). The focus on slogans is also in keeping with one of the initial motivations in studying memorability, namely, marketing applications — in other words, assessing whether a proposed slogan has features that are consistent with memorable text. The fact that it’s not clear how to construct a collection of “non-memorable” counterparts to slogans appears to pose a technical challenge. However, we can still use a language-modeling approach to assess whether the textual properties of the slogans are closer to the memorable movie quotes (as one would conjecture) or to the non-memorable movie quotes. Specifically, we train one language model on memorable quotes and another on non-memorable quotes 897 (Non)memorable language models Slogans Newswire lexical 1-gram 56.15%∗∗ 33.77%∗∗∗ 2-gram 51.51% 25.15%∗∗∗ 3-gram 52.44% 28.89%∗∗∗ syntactic 1-gram 73.09%∗∗∗ 68.27%∗∗∗ 2-gram 64.04%∗∗∗ 50.21% 3-gram 62.88%∗∗∗ 55.09%∗∗∗ Table 5: Cross-domain concept of “memorable” language: percentage of slogans that have higher likelihood under the memorable language model than under the nonmemorable one (for each of the six language models considered). Rightmost column: for reference, the percentage of newswire sentences that have higher likelihood under the memorable language model than under the nonmemorable one. Generality metric slogans mem. n-mem. % 3rd pers. pronouns 2.14% 2.16% 3.41% % indefinite articles 2.68% 2.63% 2.06% % past tense 14.60% 21.13% 26.69% Table 6: Slogans are most general when compared to memorable and non-memorable quotes. (%s of 3rd pers. pronouns and indefinite articles are relative to all tokens, %s of past tense are relative to all past and present verbs.) and compare how likely each slogan is to be produced according to these two models. As shown in the middle column of Table 5, we find that slogans are better predicted both lexically and syntactically by the former model. This result thus offers evidence for a concept of “memorable language” that can be applied beyond a single domain. We also note that the higher likelihood of slogans under a “memorable language” model is not simply occurring for the trivial reason that this model predicts all other large bodies of text better. In particular, the newswire section of the Brown corpus is predicted better at the lexical level by the language model trained on non-memorable quotes. Finally, Table 6 shows that slogans employ general language, in the sense that for each of our generality metrics, we see a slogans/memorablequotes/non-memorable quotes spectrum. 3.4 Prediction task We now show how the principles discussed above can provide features for a basic prediction task, corresponding to the task in our human pilot study: given a pair of quotes, identify the memorable one. 
Our first formulation of the prediction task uses a standard bag-of-words model10. If there were no information in the textual content of a quote to determine whether it were memorable, then an SVM employing bag-of-words features should perform no better than chance. Instead, though, it obtains 59.67% (10-fold cross-validation) accuracy, as shown in Table 7. We then develop models using features based on the measures formulated earlier in this section: generality measures (the four listed in Table 4); distinctiveness measures (likelihood according to 1, 2, and 3-gram “common language” models at the lexical and part-of-speech level for each quote in the pair, their differences, and pairwise comparisons between them); and similarityto-slogans measures (likelihood according to 1, 2, and 3-gram slogan-language models at the lexical and part-of-speech level for each quote in the pair, their differences, and pairwise comparisons between them). Even a relatively small number of distinctiveness features, on their own, improve significantly over the much larger bag-of-words model. When we include additional features based on generality and language-model features measuring similarity to slogans, the performance improves further (last line of Table 7). Thus, the main conclusion from these prediction tasks is that abstracting notions such as distinctiveness and generality can produce relatively streamlined models that outperform much heavier-weight bag-of-words models, and can suggest steps toward approaching the performance of human judges who — very much unlike our system — have the full cultural context in which movies occur at their disposal. 3.5 Other characteristics We also made some auxiliary observations that may be of interest. Specifically, we find differences in letter and sound distribution (e.g., memorable quotes — after curse-word removal — use significantly more “front sounds” (labials or front vowels such as represented by the letter i) and significantly fewer “back sounds” such as the one represented by u),11 10We discarded terms appearing fewer than 10 times. 11These findings may relate to marketing research on sound symbolism [7, 19, 40]. 898 Feature set # feats Accuracy bag of words 962 59.67% distinctiveness 24 62.05%∗ generality 4 56.70% slogan sim. 24 58.30% all three types together 52 64.27%∗∗ Table 7: Prediction: SVM 10-fold cross validation results using the respective feature sets. Random baseline accuracy is 50%. Accuracies statistically significantly greater than bag-of-words according to a two-tailed t-test are indicated with *(p<.05) and **(p<.01). word complexity (e.g., memorable quotes use words with significantly more syllables) and phrase complexity (e.g., memorable quotes use fewer coordinating conjunctions). The latter two are in line with our distinctiveness hypothesis. 4 A long time ago, in a galaxy far, far away How an item’s linguistic form affects the reaction it generates has been studied in several contexts, including evaluations of product reviews [9], political speeches [12], on-line posts [13], scientific papers [14], and retweeting of Twitter posts [36]. We use a different set of features, abstracting the notions of distinctiveness and generality, in order to focus on these higher-level aspects of phrasing rather than on particular lower-level features. Related to our interest in distinctiveness, work in advertising research has studied the effect of syntactic complexity on recognition and recall of slogans [5, 6, 24]. 
There may also be connections to Von Restorff’s isolation effect Hunt [17], which asserts that when all but one item in a list are similar in some way, memory for the different item is enhanced. Related to our interest in generality, Knapp et al. [20] surveyed subjects regarding memorable messages or pieces of advice they had received, finding that the ability to be applied to multiple concrete situations was an important factor. Memorability, although distinct from “memorizability”, relates to short- and long-term recall. Thorn and Page [34] survey sub-lexical, lexical, and semantic attributes affecting short-term memorability of lexical items. Studies of verbatim recall have also considered the task of distinguishing an exact quote from close paraphrases [3]. Investigations of longterm recall have included studies of culturally significant passages of text [29] and findings regarding the effect of rhetorical devices of alliterative [4], “rhythmic, poetic, and thematic constraints” [18, 26]. Finally, there are complex connections between humor and memory [32], which may lead to interactions with computational humor recognition [25]. 5 I think this is the beginning of a beautiful friendship. Motivated by the broad question of what kinds of information achieve widespread public awareness, we studied the the effect of phrasing on a quote’s memorability. A challenge is that quotes differ not only in how they are worded, but also in who said them and under what circumstances; to deal with this difficulty, we constructed a controlled corpus of movie quotes in which lines deemed memorable are paired with non-memorable lines spoken by the same character at approximately the same point in the same movie. After controlling for context and situation, memorable quotes were still found to exhibit, on average (there will always be individual exceptions), significant differences from non-memorable quotes in several important respects, including measures capturing distinctiveness and generality. Our experiments with slogans show how the principles we identify can extend to a different domain. Future work may lead to applications in marketing, advertising and education [4]. Moreover, the subtle nature of memorability, and its connection to research in psychology, suggests a range of further research directions. We believe that the framework developed here can serve as the basis for further computational studies of the process by which information takes hold in the public consciousness, and the role that language effects play in this process. My mother thanks you. My father thanks you. My sister thanks you. And I thank you: Rebecca Hwa, Evie Kleinberg, Diana Minculescu, Alex Niculescu-Mizil, Jennifer Smith, Benjamin Zimmer, and the anonymous reviewers for helpful discussions and comments; our annotators Steven An, Lars Backstrom, Eric Baumer, Jeff Chadwick, Evie Kleinberg, and Myle Ott; and the makers of Cepacol, Robitussin, and Sudafed, whose products got us through the submission deadline. This paper is based upon work supported in part by NSF grants IIS-0910664, IIS-1016099, Google, and Yahoo! 899 References [1] Eytan Adar, Li Zhang, Lada A. Adamic, and Rajan M. Lukose. Implicit structure and the dynamics of blogspace. In Workshop on the Weblogging Ecosystem, 2004. [2] Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. Group formation in large social networks: Membership, growth, and evolution. In Proceedings of KDD, 2006. [3] Elizabeth Bates, Walter Kintsch, Charles R. 
Fletcher, and Vittoria Giuliani. The role of pronominalization and ellipsis in texts: Some memory experiments. Journal of Experimental Psychology: Human Learning and Memory, 6 (6):676–691, 1980. [4] Frank Boers and Seth Lindstromberg. Finding ways to make phrase-learning feasible: The mnemonic effect of alliteration. System, 33(2): 225–238, 2005. [5] Samuel D. Bradley and Robert Meeds. Surface-structure transformations and advertising slogans: The case for moderate syntactic complexity. Psychology and Marketing, 19: 595–619, 2002. [6] Robert Chamblee, Robert Gilmore, Gloria Thomas, and Gary Soldow. When copy complexity can help ad readership. Journal of Advertising Research, 33(3):23–23, 1993. [7] John Colapinto. Famous names. The New Yorker, pages 38–43, 2011. [8] Cristian Danescu-Niculescu-Mizil and Lillian Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 2011. [9] Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg, and Lillian Lee. How opinions are received by online communities: A case study on Amazon.com helpfulness votes. In Proceedings of WWW, pages 141–150, 2009. [10] Stuart Fischoff, Esmeralda Cardenas, Angela Hernandez, Korey Wyatt, Jared Young, and Rachel Gordon. Popular movie quotes: Reflections of a people and a culture. In Annual Convention of the American Psychological Association, 2000. [11] Daniel Gruhl, R. Guha, David Liben-Nowell, and Andrew Tomkins. Information diffusion through blogspace. Proceedings of WWW, pages 491–501, 2004. [12] Marco Guerini, Carlo Strapparava, and Oliviero Stock. Trusting politicians’ words (for persuasive NLP). In Proceedings of CICLing, pages 263–274, 2008. [13] Marco Guerini, Carlo Strapparava, and G¨ozde ¨Ozbal. Exploring text virality in social networks. In Proceedings of ICWSM (poster), 2011. [14] Marco Guerini, Alberto Pepe, and Bruno Lepri. Do linguistic style and readability of scientific abstracts affect their virality? In Proceedings of ICWSM, 2012. [15] Richard Jackson Harris, Abigail J. Werth, Kyle E. Bures, and Chelsea M. Bartel. Social movie quoting: What, why, and how? Ciencias Psicologicas, 2(1):35–45, 2008. [16] Chip Heath, Chris Bell, and Emily Steinberg. Emotional selection in memes: The case of urban legends. Journal of Personality, 81(6): 1028–1041, 2001. [17] R. Reed Hunt. The subtlety of distinctiveness: What von Restorff really did. Psychonomic Bulletin & Review, 2(1):105–112, 1995. [18] Ira E. Hyman Jr. and David C. Rubin. Memorabeatlia: A naturalistic study of long-term memory. Memory & Cognition, 18(2):205– 214, 1990. [19] Richard R. Klink. Creating brand names with meaning: The use of sound symbolism. Marketing Letters, 11(1):5–20, 2000. [20] Mark L. Knapp, Cynthia Stohl, and Kathleen K. Reardon. “Memorable” messages. Journal of Communication, 31(4):27– 41, 1981. [21] Henry Kuˇcera and W. Nelson Francis. Computational analysis of present-day American English. Dartmouth Publishing Group, 1967. 900 [22] Jure Leskovec, Lada Adamic, and Bernardo Huberman. The dynamics of viral marketing. ACM Transactions on the Web, 1(1), May 2007. [23] Jure Leskovec, Lars Backstrom, and Jon Kleinberg. Meme-tracking and the dynamics of the news cycle. In Proceedings of KDD, pages 497–506, 2009. [24] Tina M. Lowrey. The relation between script complexity and commercial memorability. Journal of Advertising, 35(3):7–15, 2006. [25] Rada Mihalcea and Carlo Strapparava. 
Learning to laugh (automatically): Computational models for humor recognition. Computational Intelligence, 22(2):126–142, 2006. [26] Milman Parry and Adam Parry. The making of Homeric verse: The collected papers of Milman Parry. Clarendon Press, Oxford, 1971. [27] Everett Rogers. Diffusion of Innovations. Free Press, fourth edition, 1995. [28] Daniel M. Romero, Brendan Meeder, and Jon Kleinberg. Differences in the mechanics of information diffusion across topics: Idioms, political hashtags, and complex contagion on Twitter. Proceedings of WWW, pages 695–704, 2011. [29] David C. Rubin. Very long-term memory for prose and verse. Journal of Verbal Learning and Verbal Behavior, 16(5):611–621, 1977. [30] Nathan Schneider, Rebecca Hwa, Philip Gianfortoni, Dipanjan Das, Michael Heilman, Alan W. Black, Frederick L. Crabbe, and Noah A. Smith. Visualizing topical quotations over time to understand news discourse. Technical Report CMU-LTI-01-103, CMU, 2010. [31] David Strang and Sarah Soule. Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24:265–290, 1998. [32] Hannah Summerfelt, Louis Lippman, and Ira E. Hyman Jr. The effect of humor on memory: Constrained by the pun. The Journal of General Psychology, 137(4), 2010. [33] Eric Sun, Itamar Rosenn, Cameron Marlow, and Thomas M. Lento. Gesundheit! Modeling contagion through Facebook News Feed. In Proceedings of ICWSM, 2009. [34] Annabel Thorn and Mike Page. Interactions Between Short-Term and Long-Term Memory in the Verbal Domain. Psychology Press, 2009. [35] Louis L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273– 286, 1927. [36] Oren Tsur and Ari Rappoport. What’s in a Hashtag? Content based prediction of the spread of ideas in microblogging communities. In Proceedings of WSDM, 2012. [37] Fang Wu, Bernardo A. Huberman, Lada A. Adamic, and Joshua R. Tyler. Information flow in social groups. Physica A: Statistical and Theoretical Physics, 337(1-2):327–335, 2004. [38] Shaomei Wu, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts. Who says what to whom on Twitter. In Proceedings of WWW, 2011. [39] Jaewon Yang and Jure Leskovec. Patterns of temporal variation in online media. In Proceedings of WSDM, 2011. [40] Eric Yorkston and Geeta Menon. A sound idea: Phonetic effects of brand names on consumer judgments. Journal of Consumer Research, 31 (1):43–51, 2004. 901
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 902–911, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics Modeling the Translation of Predicate-Argument Structure for SMT Deyi Xiong, Min Zhang∗, Haizhou Li Human Language Technology Institute for Infocomm Research 1 Fusionopolis Way, #21-01 Connexis, Singapore 138632 {dyxiong, mzhang, hli}@i2r.a-star.edu.sg Abstract Predicate-argument structure contains rich semantic information of which statistical machine translation hasn’t taken full advantage. In this paper, we propose two discriminative, feature-based models to exploit predicateargument structures for statistical machine translation: 1) a predicate translation model and 2) an argument reordering model. The predicate translation model explores lexical and semantic contexts surrounding a verbal predicate to select desirable translations for the predicate. The argument reordering model automatically predicts the moving direction of an argument relative to its predicate after translation using semantic features. The two models are integrated into a state-of-theart phrase-based machine translation system and evaluated on Chinese-to-English translation tasks with large-scale training data. Experimental results demonstrate that the two models significantly improve translation accuracy. 1 Introduction Recent years have witnessed increasing efforts towards integrating predicate-argument structures into statistical machine translation (SMT) (Wu and Fung, 2009b; Liu and Gildea, 2010). In this paper, we take a step forward by introducing a novel approach to incorporate such semantic structures into SMT. Given a source side predicate-argument structure, we attempt to translate each semantic frame (predicate and its associated arguments) into an appropriate target string. We believe that the translation of predicates and reordering of arguments are the two central ∗Corresponding author issues concerning the transfer of predicate-argument structure across languages. Predicates1 are essential elements in sentences. Unfortunately they are usually neither correctly translated nor translated at all in many SMT systems according to the error study by Wu and Fung (2009a). This suggests that conventional lexical and phrasal translation models adopted in those SMT systems are not sufficient to correctly translate predicates in source sentences. Thus we propose a discriminative, feature-based predicate translation model that captures not only lexical information (i.e., surrounding words) but also high-level semantic contexts to correctly translate predicates. Arguments contain information for questions of who, what, when, where, why, and how in sentences (Xue, 2008). One common error in translating arguments is about their reorderings: arguments are placed at incorrect positions after translation. In order to reduce such errors, we introduce a discriminative argument reordering model that uses the position of a predicate as the reference axis to estimate positions of its associated arguments on the target side. In this way, the model predicts moving directions of arguments relative to their predicates with semantic features. We integrate these two discriminative models into a state-of-the-art phrase-based system. Experimental results on large-scale Chinese-to-English translation show that both models are able to obtain significant improvements over the baseline. 
Our analysis on system outputs further reveals that they can indeed help reduce errors in predicate translations and argument reorderings. 1We only consider verbal predicates in this paper. 902 The paper is organized as follows. In Section 2, we will introduce related work and show the significant differences between our models and previous work. In Section 3 and 4, we will elaborate the proposed predicate translation model and argument reordering model respectively, including details about modeling, features and training procedure. Section 5 will introduce how to integrate these two models into SMT. Section 6 will describe our experiments and results. Section 7 will empirically discuss how the proposed models improve translation accuracy. Finally we will conclude with future research directions in Section 8. 2 Related Work Predicate-argument structures (PAS) are explored for SMT on both the source and target side in some previous work. As PAS analysis widely employs global and sentence-wide features, it is computationally expensive to integrate target side predicateargument structures into the dynamic programming style of SMT decoding (Wu and Fung, 2009b). Therefore they either postpone the integration of target side PASs until the whole decoding procedure is completed (Wu and Fung, 2009b), or directly project semantic roles from the source side to the target side through word alignments during decoding (Liu and Gildea, 2010). There are other previous studies that explore only source side predicate-argument structures. Komachi and Matsumoto (2006) reorder arguments in source language (Japanese) sentences using heuristic rules defined on source side predicate-argument structures in a pre-processing step. Wu et al. (2011) automate this procedure by automatically extracting reordering rules from predicate-argument structures and applying these rules to reorder source language sentences. Aziz et al. (2011) incorporate source language semantic role labels into a tree-to-string SMT system. Although we also focus on source side predicateargument structures, our models differ from the previous work in two main aspects: 1) we propose two separate discriminative models to exploit predicateargument structures for predicate translation and argument reordering respectively; 2) we consider argument reordering as an argument movement (relative to its predicate) prediction problem and use a discriminatively trained classifier for such predictions. Our predicate translation model is also related to previous discriminative lexicon translation models (Berger et al., 1996; Venkatapathy and Bangalore, 2007; Mauser et al., 2009). While previous models predict translations for all words in vocabulary, we only focus on verbal predicates. This will tremendously reduce the amount of training data required, which usually is a problem in discriminative lexicon translation models (Mauser et al., 2009). Furthermore, the proposed translation model also differs from previous lexicon translation models in that we use both lexical and semantic features. Our experimental results show that semantic features are able to further improve translation accuracy. 3 Predicate Translation Model In this section, we present the features and the training process of the predicate translation model. 3.1 Model Following the context-dependent word models in (Berger et al., 1996), we propose a discriminative predicate translation model. 
The essential component of our model is a maximum entropy classifier pt(e|C(v)) that predicts the target translation e for a verbal predicate v given its surrounding context C(v). The classifier can be formulated as follows. pt(e|C(v)) = exp(P i θifi(e, C(v))) P e′ exp(P i θifi(e′, C(v))) (1) where fi are binary features, θi are weights of these features. Given a source sentence which contains N verbal predicates {vi}N 1 , our predicate translation model Mt can be denoted as Mt = N Y i=1 pt(evi|C(vi)) (2) Note that we do not restrict the target translation e to be a single word. We allow e to be a phrase of length up to 4 words so as to capture multi-word translations for a verbal predicate. For example, a Chinese verb “u1(issue)” can be translated as “to be issued” or “have issued” with modality words. 903 This will increase the number of classes to be predicted by the maximum entropy classifier. But according to our observation, it is still computationally tractable (see Section 3.3). If a verbal predicate is not translated, we set e = NULL so that we can also capture null translations for verbal predicates. 3.2 Features The apparent advantage of discriminative lexicon translation models over generative translation models (e.g., conventional lexical translation model as described in (Koehn et al., 2003)) is that discriminative models allow us to integrate richer contexts (lexical, syntactic or semantic) into target translation prediction. We use two kinds of features to predict translations for verbal predicates: 1) lexical features and 2) semantic features. All features are in the following binary form. f(e, C(v)) =  1, if e = ♣and C(v).♥= ♠ 0, else (3) where the symbol ♣is a placeholder for a possible target translation (up to 4 words), the symbol ♥indicates a contextual (lexical or semantic) element for the verbal predicate v, and the symbol ♠represents the value of ♥. Lexical Features: The lexical element ♥is extracted from the surrounding words of verbal predicate v. We use the preceding 3 words and the succeeding 3 words to define the lexical context for the verbal predicate v. Therefore ♥∈ {w−3, w−2, w−1, v, w1, w2, w3}. Semantic Features: The semantic element ♥is extracted from the surrounding arguments of verbal predicate v. In particular, we define a semantic window centered at the verbal predicate with 6 arguments {A−3, A−2, A−1, A1, A2, A3} where A−3 −A−1 are arguments on the left side of v while A1 −A3 are those on the right side. Different verbal predicates have different number of arguments in different linguistic scenarios. We observe on our training data that the number of arguments for 96.5% verbal predicates on each side (left/right) is not larger than 3. Therefore the defined 6-argument semantic window is sufficient to describe argument contexts for predicates. For each argument Ai in the defined semanf(e, C(v)) = 1 if and only if e = adjourn and C(v).Ah −3 = Sn¬ e = adjourn and C(v).Ar −1 = ARGM-TMP e = adjourn and C(v).Ah 1 = U e = adjourn and C(v).Ar 2 = null e = adjourn and C(v).Ah 3 = null Table 1: Semantic feature examples. tic window, we use its semantic role (i.e., ARG0, ARGM-TMP and so on) Ar i and head word Ah i to define semantic context elements ♥. If an argument Ai does not exist for the verbal predicate v 2, we set the value of both Ar i and Ah i to null. Figure 1 shows a Chinese sentence with its predicate-argument structure and English translation. 
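To make the feature definitions above concrete, here is a minimal Python sketch of how the binary features of Eq. (3) could be assembled for a verbal predicate and used to score a candidate translation with the maximum-entropy classifier of Eq. (1). The container types, feature-string format, function names, and toy weights are illustrative assumptions rather than the authors' implementation; in the paper the weights are learned by maximum-entropy training on events extracted from the word-aligned data.

```python
import math
from collections import namedtuple

# Hypothetical containers; the paper's actual data structures are not specified.
Argument = namedtuple("Argument", ["role", "head"])             # e.g. ("ARGM-TMP", "Thursday")
Context = namedtuple("Context", ["words", "pred_idx", "args"])  # args: offset in {-3..-1,1..3} -> Argument

def extract_features(e, ctx):
    """Binary features of Eq. (3): a candidate translation e conjoined with
    the lexical context (3 words on each side of the predicate) and the
    semantic context (role and head word of up to 3 arguments per side)."""
    feats = []
    # Lexical features: w-3 .. w-1, v, w1 .. w3
    for offset in range(-3, 4):
        i = ctx.pred_idx + offset
        w = ctx.words[i] if 0 <= i < len(ctx.words) else "<s>"
        feats.append("e=%s|w%d=%s" % (e, offset, w))
    # Semantic features: role A^r_i and head word A^h_i; 'null' if the argument is absent
    for pos in (-3, -2, -1, 1, 2, 3):
        arg = ctx.args.get(pos)
        feats.append("e=%s|Ar%d=%s" % (e, pos, arg.role if arg else "null"))
        feats.append("e=%s|Ah%d=%s" % (e, pos, arg.head if arg else "null"))
    return feats

def predicate_translation_prob(e, ctx, candidates, weights):
    """Eq. (1): p_t(e | C(v)) under a maximum-entropy classifier with binary
    features; `weights` maps feature strings to learned theta values."""
    def score(cand):
        return sum(weights.get(f, 0.0) for f in extract_features(cand, ctx))
    z = sum(math.exp(score(c)) for c in candidates)
    return math.exp(score(e)) / z

# Toy usage loosely based on the running "adjourn" example; weights are made up.
ctx = Context(words=["SecurityCouncil", "will", "adjourn_zh", "for", "4days", "Thursday"],
              pred_idx=2,
              args={-1: Argument("ARG0", "SecurityCouncil"), 1: Argument("ARGM-TMP", "Thursday")})
weights = {"e=adjourn|Ar1=ARGM-TMP": 1.2, "e=NULL|Ar1=ARGM-TMP": -0.4}
print(predicate_translation_prob("adjourn", ctx, ["adjourn", "NULL"], weights))
```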
The verbal predicate “>¬/adjourn” (in bold) has 4 arguments: one in an ARG0 agent role, one in an ARGM-ADV adverbial modifier role, one in an ARGM-TMP temporal modifier role and the last one in an ARG1 patient role. Table 1 shows several semantic feature examples of this verbal predicate. 3.3 Training In order to train the discriminative predicate translation model, we first parse source sentences and labeled semantic roles for all verbal predicates (see details in Section 6.1) in our word-aligned bilingual training data. Then we extract all training events for verbal predicates which occur at least 10 times in the training data. A training event for a verbal predicate v consists of all contextual elements C(v) (e.g., w1, Ah 1) defined in the last section and the target translation e. Using these events, we train one maximum entropy classifier per verbal predicate (16,121 verbs in total) via the off-the-shelf MaxEnt toolkit3. We perform 100 iterations of the L-BFGS algorithm implemented in the training toolkit for each verbal predicate with both Gaussian prior and event cutoff set to 1 to avoid overfitting. After event cutoff, we have an average of 140 classes (target translations) per verbal predicate with the maximum number of classes being 9,226. The training takes an average of 52.6 seconds per verb. In order to expedite the train2For example, the verb v has only two arguments on its left side. Thus argument A−3 doest not exist. 3Available at: http://homepages.inf.ed.ac.uk/lzhang10/ maxent toolkit.html 904 The [Security Council] will adjourn for [4 days] [starting Thursday] Sn¬1 ò2 [g3 ±o4 m©5] > > >¬ ¬ ¬6 [o7 U8] ARG0 ARGM-ADV ARGM-TMP ARG1 Figure 1: An example of predicate-argument structure in Chinese and its aligned English translation. The bold word in Chinese is the verbal predicate. The subscripts on the Chinese sentence show the indexes of words from left to right. ing, we run the training toolkit in a parallel manner. 4 Argument Reordering Model In this section we introduce the discriminative argument reordering model, features and the training procedure. 4.1 Model Since the predicate determines what arguments are involved in its semantic frame and semantic frames tend to be cohesive across languages (Fung et al., 2006), the movements of predicate and its arguments across translations are like the motions of a planet and its satellites. Therefore we consider the reordering of an argument as the motion of the argument relative to its predicate. In particular, we use the position of the predicate as the reference axis. The motion of associated arguments relative to the reference axis can be roughly divided into 3 categories4: 1) no change across languages (NC); 2) moving from the left side of its predicate to the right side of the predicate after translation (L2R); and 3) moving from the right side of its predicate to the left side of the predicate after translation (R2L). Let’s revisit Figure 1. The ARG0, ARGM-ADV and ARG1 are located at the same side of their predicate after being translated into English, therefore the reordering category of these three arguments is assigned as “NC”. The ARGM-TMP is moved from the left side of “>¬/adjourn” to the right side of “adjourn” after translation, thus its reordering category is L2R. 
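As a concrete illustration of how the three reordering categories could be read off word-aligned training data, the short sketch below labels an argument's motion given its source-side position relative to the predicate and the target spans of both, taken from the alignment. The function name and the handling of edge cases (overlapping or empty target spans) are assumptions for illustration; the paper does not spell these details out, and footnote 4's assumption that argument translations are not interrupted is carried over here.

```python
def reordering_category(arg_side, pred_tgt_span, arg_tgt_span):
    """Label the motion of an argument relative to its predicate.

    arg_side: 'L' or 'R' -- side of the argument w.r.t. the predicate
              in the source sentence.
    pred_tgt_span, arg_tgt_span: (start, end) target-word index spans
              obtained from the word alignment.

    Returns one of 'NC', 'L2R', 'R2L'. Assumes the two target spans do
    not overlap.
    """
    arg_now_left = arg_tgt_span[1] < pred_tgt_span[0]  # argument precedes predicate on the target side
    if arg_side == 'L':
        return 'NC' if arg_now_left else 'L2R'
    else:
        return 'NC' if not arg_now_left else 'R2L'

# Figure 1 example ("The Security Council will adjourn for 4 days starting Thursday"):
# ARGM-TMP sits left of the predicate in Chinese but ends up right of "adjourn" -> L2R;
# ARG1 stays on the right side of the predicate after translation -> NC.
print(reordering_category('L', pred_tgt_span=(4, 4), arg_tgt_span=(8, 9)))  # L2R
print(reordering_category('R', pred_tgt_span=(4, 4), arg_tgt_span=(6, 7)))  # NC
```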
In order to predict the reordering category for an argument, we propose a discriminative argument reordering model that uses a maximum en4Here we assume that the translations of arguments are not interrupted by their predicates, other arguments or any words outside the arguments in question. We leave for future research the task of determining whether arguments should be translated as a unit or not. tropy classifier to calculate the reordering category m ∈{NC, L2R, R2L} for an argument A as follows. pr(m|C(A)) = exp(P i θifi(m, C(A))) P m′ exp(P i θifi(m′, C(A))) (4) where C(A) indicates the surrounding context of A. The features fi will be introduced in the next section. We assume that motions of arguments are independent on each other. Given a source sentence with labeled arguments {Ai}N 1 , our discriminative argument reordering model Mr is formulated as Mr = N Y i=1 pr(mAi|C(Ai)) (5) 4.2 Features The features fi used in the argument reordering model still takes the binary form as in Eq. (3). Table 2 shows the features that are used in the argument reordering model. We extract features from both the source and target side. On the source side, the features include the verbal predicate, the semantic role of the argument, the head word and the boundary words of the argument. On the target side, the translation of the verbal predicate, the translation of the head word of the argument, as well as the boundary words of the translation of the argument are used as features. 4.3 Training To train the argument reordering model, we first extract features defined in the last section from our bilingual training data where source sentences are annotated with predicate-argument structures. We also study the distribution of argument reordering categories (i.e.,NC, L2R and R2L) in the training data, which is shown in Table 3. Most arguments, accounting for 82.43%, are on the same side of their verbal predicates after translation. The remaining 905 Features of an argument A for reordering src its verbal predicate Ap its semantic role Ar its head word Ah the leftmost word of A the rightmost word of A tgt the translation of Ap the translation of Ah the leftmost word of the translation of A the rightmost word of the translation of A Table 2: Features adopted in the argument reordering model. Reordering Category Percent NC 82.43% L2R 11.19% R2L 6.38% Table 3: Distribution of argument reordering categories in the training data. arguments (17.57%) are moved either from the left side of their predicates to the right side after translation (accounting for 11.19%) or from the right side to the left side of their translated predicates (accounting for 6.38%). After all features are extracted, we use the maximum entropy toolkit in Section 3.3 to train the maximum entropy classifier as formulated in Eq. (4). We perform 100 iterations of L-BFGS. 5 Integrating the Two Models into SMT In this section, we elaborate how to integrate the two models into phrase-based SMT. In particular, we integrate the models into a phrase-based system which uses bracketing transduction grammars (BTG) (Wu, 1997) for phrasal translation (Xiong et al., 2006). Since the system is based on a CKY-style decoder, the integration algorithms introduced here can be easily adapted to other CKY-based decoding systems such as the hierarchical phrasal system (Chiang, 2007). 5.1 Integrating the Predicate Translation Model It is straightforward to integrate the predicate translation model into phrase-based SMT (Koehn et al., 2003; Xiong et al., 2006). 
We maintain word alignments for each phrase pair in the phrase table. Given a source sentence with its predicateargument structure, we detect all verbal predicates and load trained predicate translation classifiers for these verbs. Whenever a hypothesis covers a new verbal predicate v, we find the target translation e for v through word alignments and then calculate its translation probability pt(e|C(v)) according to Eq. (1). The predicate translation model (as formulated in Eq. (2)) is integrated into the whole log-linear model just like the conventional lexical translation model in phrase-based SMT (Koehn et al., 2003). The two models are independently estimated but complementary to each other. While the lexical translation model calculates the probability of a verbal predicate being translated given its local lexical context, the discriminative predicate translation model is able to employ both lexical and semantic contexts to predict translations for verbs. 5.2 Integrating the Argument Reordering Model Before we introduce the integration algorithm for the argument reordering model, we define two functions A and N on a source sentence and its predicate-argument structure τ as follows. • A(i, j, τ): from the predicate-argument structure τ, the function finds all predicate-argument pairs which are completely located within the span from source word i to j. For example, in Figure 1, A(3, 6, τ) = {(>¬, ARGM-TMP)} while A(2, 3, τ) = {}, A(1, 5, τ) = {} because the verbal predicate “>¬” is located outside the span (2,3) and (1,5). • N(i, k, j, τ): the function finds all predicateargument pairs that cross the two neighboring spans (i, k) and (k +1, j). It can be formulated as A(i, j, τ) −(A(i, k, τ) S A(k + 1, j, τ)). We then define another function Pr to calculate the argument reordering model probability on all arguments which are found by the previous two functions A and N as follows. Pr(B) = Y A∈B pr(mA|C(A)) (6) 906 where B denotes either A or N. Following (Chiang, 2007), we describe the algorithm in a deductive system. It is shown in Figure 2. The algorithm integrates the argument reordering model into a CKY-style decoder (Xiong et al., 2006). The item [X, i, j] denotes a BTG node X spanning from i to j on the source side. For notational convenience, we only show the argument reordering model probability for each item, ignoring all other sub-model probabilities such as the language model probability. The Eq. (7) shows how we calculate the argument reordering model probability when a lexical rule is applied to translate a source phrase c to a target phrase e. The Eq. (8) shows how we compute the argument reordering model probability for a span (i, j) in a dynamic programming manner when a merging rule is applied to combine its two subspans in a straight (X →[X1, X2]) or inverted order (X →⟨X1, X2⟩). We directly use the probabilities Pr(A(i, k, τ)) and Pr(A(k + 1, j, τ)) that have been already obtained for the two sub-spans (i, k) and (k + 1, j). In this way, we only need to calculate the probability Pr(N(i, k, j, τ)) for predicateargument pairs that cross the two sub-spans. 6 Experiments In this section, we present our experiments on Chinese-to-English translation tasks, which are trained with large-scale data. The experiments are aimed at measuring the effectiveness of the proposed discriminative predicate translation model and argument reordering model. 6.1 Setup The baseline system is the BTG-based phrasal system (Xiong et al., 2006). 
Our training corpora5 consist of 3.8M sentence pairs with 96.9M Chinese words and 109.5M English words. We ran GIZA++ on these corpora in both directions and then applied the “grow-diag-final” refinement rule to obtain word alignments. We then used all these word-aligned corpora to generate our phrase table. Our 5-gram language model was trained on the Xinhua section of the English Gigaword corpus (306 million words) 5The corpora include LDC2004E12, LDC2004T08, LDC2005T10, LDC2003E14, LDC2002E18, LDC2005T06, LDC2003E07 and LDC2004T07. using the SRILM toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing. To train the proposed predicate translation model and argument reordering model, we first parsed all source sentences using the Berkeley Chinese parser (Petrov et al., 2006) and then ran the Chinese semantic role labeler6 (Li et al., 2010) on all source parse trees to annotate semantic roles for all verbal predicates. After we obtained semantic roles on the source side, we extracted features as described in Section 3.2 and 4.2 and used these features to train our two models as described in Section 3.3 and 4.3. We used the NIST MT03 evaluation test data as our development set, and the NIST MT04, MT05 as the test sets. We adopted the case-insensitive BLEU-4 (Papineni et al., 2002) as the evaluation metric. Statistical significance in BLEU differences was tested by paired bootstrap re-sampling (Koehn, 2004). 6.2 Results Our first group of experiments is to investigate whether the predicate translation model is able to improve translation accuracy in terms of BLEU and whether semantic features are useful. The experimental results are shown in Table 4. From the table, we have the following two observations. • The proposed predicate translation models achieve an average improvement of 0.57 BLEU points across the two NIST test sets when all features (lex+sem) are used. Such an improvement is statistically significant (p < 0.01). According to our statistics, there are 5.07 verbal predicates per sentence in NIST04 and 4.76 verbs per sentence in NIST05, which account for 18.02% and 16.88% of all words in NIST04 and 05 respectively. This shows that not only verbal predicates are semantically important, they also form a major part of the sentences. Therefore, whether verbal predicates are translated correctly or not has a great impact on the translation accuracy of the whole sentence 7. 6Available at: http://nlp.suda.edu.cn/∼jhli/. 7The example in Table 6 shows that the translations of verbs even influences reorderings and translations of neighboring words. 907 X →c/e [X, i, j] : Pr(A(i, j, τ)) (7) X →[X1, X2] or ⟨X1, X2⟩[X1, i, k] : Pr(A(i, k, τ)) [X2, k + 1, j] : Pr(A(k + 1, j, τ)) [X, i, j] : Pr(A(i, k, τ)) · Pr(A(k + 1, j, τ)) · Pr(N(i, k, j, τ)) (8) Figure 2: Integrating the argument reordering model into a BTG-style decoder. Model NIST04 NIST05 Base 35.52 33.80 Base+PTM (lex) 35.71+ 34.09+ Base+PTM (lex+sem) 36.10++** 34.35++* Table 4: Effects of the proposed predicate translation model (PTM). PTM (lex): predicate translation model with lexical features; PTM (lex+sem): predicate translation model with both lexical and semantic features; +/++: better than the baseline (p < 0.05/0.01). */**: better than Base+PTM (lex) (p < 0.05/0.01). Model NIST04 NIST05 Base 35.52 33.80 Base+ARM 35.82++ 34.29++ Base+ARM+PTM 36.19++ 34.72++ Table 5: Effects of the proposed argument reordering model (ARM) and the combination of ARM and PTM. ++: better than the baseline (p < 0.01). 
• When we integrate both lexical and semantic features (lex+sem) described in Section 3.2, we obtain an improvement of about 0.33 BLEU points over the system where only lexical features (lex) are used. Such a gain, which is statistically significant, confirms the effectiveness of semantic features. Our second group of experiments is to validate whether the argument reordering model is capable of improving translation quality. Table 5 shows the results. We obtain an average improvement of 0.4 BLEU points on the two test sets over the baseline when we incorporate the proposed argument reordering model into our system. The improvements on the two test sets are both statistically significant (p < 0.01). Finally, we integrate both the predicate translation model and argument reordering model into the final system. The two models collectively achieve an improvement of up to 0.92 BLEU points over the baseline, which is shown in Table 5. 7 Analysis In this section, we conduct some case studies to show how the proposed models improve translation accuracy by looking into the differences that they make on translation hypotheses. Table 6 displays a translation example which shows the difference between the baseline and the system enhanced with the predicate translation model. There are two verbal predicates “` /head to” and “ë\/attend” in the source sentence. In order to get the most appropriate translations for these two verbal predicates, we should adopt different ways to translate them. The former should be translated as a corresponding verb word or phrase while the latter into a preposition word “for”. Unfortunately, the baseline incorrectly translates the two verbs. Furthermore, such translation errors even result in undesirable reorderings of neighboring words “Ë|ð/Bethlehem and “‘g/mass”. This indicates that verbal predicate translation errors may lead to more errors, such as inappropriate reorderings or lexical choices for neighboring words. On the contrary, we can see that our predicate translation model is able to help select appropriate words for both verbs. The correct translations of these two verbs also avoid incorrect reorderings of neighboring words. Table 7 shows another example to demonstrate how the argument reordering model improve reorderings. The verbal predicate “?1/carry out” has three arguments, ARG0, ARG-ADV and ARG1. The ARG1 argument should be moved from the right side of the predicate to its left side after translation. The ARG0 argument can either stay on the left side or move to right side of the predicate. Ac908 Base [ê Z] &ä ` ` ` Ë|ð ë ë ë\ \ \ [²S –] ‘g [thousands of] followers to Mass in Bethlehem [Christmas Eve] Base+PTM [ê Z] &ä ` ` ` Ë|ð ë ë ë\ \ \ [²S –] ‘g [thousands of] devotees [rushed to] Bethlehem for [Christmas Eve] mass Ref thousands of worshippers head to Bethlehem for Christmas Midnight mass Table 6: A translation example showing the difference between the baseline and the system with the predicate translation model (PTM). Phrase alignments in the two system outputs are shown with dashed lines. Chinese words in bold are verbal predicates. PAS [k' ù @ /J ´w XÚ] „‡ ? ? 
?1 1 1 [ õ ­‡  û] ARG0 ARGM-ADV ARG1 Base [k' ù] @ /J [´w XÚ] „‡ [?1  õ] [­‡  û] the more [important consultations] also set disaster [warning system] Base+ARM k' [ù @] /J [´w XÚ] [„‡ ?1] [ õ] [­‡  û] more [important consultations] on [such a] disaster [warning system] [should be carried out] Ref more important discussions will be held on the disaster warning system Table 7: A translation example showing the difference between the baseline and the system with the argument reordering model (ARM). The predicate-argument structure (PAS) of the source sentence is also displayed in the first row. cording to the phrase alignments of the baseline, we clearly observe three serious translation errors: 1) the ARG0 argument is translated into separate groups which are not adjacent on the target side; 2) the predicate is not translated at all; and 3) the ARG1 argument is not moved to the left side of the predicate after translation. All of these 3 errors are avoided in the Base+ARM system output as a result of the argument reordering model that correctly identifies arguments and moves them in the right directions. 8 Conclusions and Future Work We have presented two discriminative models to incorporate source side predicate-argument structures into SMT. The two models have been integrated into a phrase-based SMT system and evaluated on Chinese-to-English translation tasks using large-scale training data. The first model is the predicate translation model which employs both lexical and semantic contexts to translate verbal predicates. The second model is the argument reordering model which estimates the direction of argument movement relative to its predicate after translation. Experimental results show that both models are able to significantly improve translation accuracy in terms of BLEU score. In the future work, we will extend our predicate translation model to translate both verbal and nominal predicates. Nominal predicates also frequently occur in Chinese sentences and thus accurate translations of them are desirable for SMT. We also want to address another translation issue of arguments as shown in Table 7: arguments are wrongly translated into separate groups instead of a cohesive unit (Wu and Fung, 2009a). We will build an argument segmentation model that follows (Xiong et al., 2011) to determine whether arguments should be translated as a unit or not. 909 References Wilker Aziz, Miguel Rios, and Lucia Specia. 2011. Shallow semantic trees for smt. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 316–322, Edinburgh, Scotland, July. Association for Computational Linguistics. Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Pascale Fung, Wu Zhaojun, Yang Yongsheng, and Dekai Wu. 2006. Automatic learning of chinese english semantic structure mapping. In IEEE/ACL 2006 Workshop on Spoken Language Technology (SLT 2006), Aruba, December. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 58–54, Edmonton, Canada, May-June. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. 
In Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. Mamoru Komachi and Yuji Matsumoto. 2006. Phrase reordering for statistical machine translation based on predicate-argument structure. In In Proceedings of the International Workshop on Spoken Language Translation: Evaluation Campaign on Spoken Language Translation, pages 77–82. Junhui Li, Guodong Zhou, and Hwee Tou Ng. 2010. Joint syntactic and semantic parsing of chinese. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1108– 1117, Uppsala, Sweden, July. Association for Computational Linguistics. Ding Liu and Daniel Gildea. 2010. Semantic role features for machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 716–724, Beijing, China, August. Coling 2010 Organizing Committee. Arne Mauser, Saˇsa Hasan, and Hermann Ney. 2009. Extending statistical machine translation with discriminative and trigger-based lexicon models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 210–218, Singapore, August. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433–440, Sydney, Australia, July. Association for Computational Linguistics. Andreas Stolcke. 2002. Srilm–an extensible language modeling toolkit. In Proceedings of the 7th International Conference on Spoken Language Processing, pages 901–904, Denver, Colorado, USA, September. Sriram Venkatapathy and Srinivas Bangalore. 2007. Three models for discriminative machine translation using global lexical selection and sentence reconstruction. In Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation, pages 96–102, Rochester, New York, April. Association for Computational Linguistics. Dekai Wu and Pascale Fung. 2009a. Can semantic role labeling improve smt. In Proceedings of the 13th Annual Conference of the EAMT, pages 218–225, Barcelona, May. Dekai Wu and Pascale Fung. 2009b. Semantic roles for smt: A hybrid two-pass model. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 13–16, Boulder, Colorado, June. Association for Computational Linguistics. Xianchao Wu, Katsuhito Sudoh, Kevin Duh, Hajime Tsukada, and Masaaki Nagata. 2011. Extracting preordering rules from predicate-argument structures. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 29–37, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. 
In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 521–528, Sydney, Australia, July. Association for Computational Linguistics. Deyi Xiong, Min Zhang, and Haizhou Li. 2011. A maximum-entropy segmentation model for statistical machine translation. IEEE Transactions on Audio, Speech and Language Processing, 19(8):2494–2505. 910 Nianwen Xue. 2008. Labeling chinese predicates with semantic roles. Computational Linguistics, 34(2):225–255. 911
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 912–920, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics A Ranking-based Approach to Word Reordering for Statistical Machine Translation∗ Nan Yang†, Mu Li‡, Dongdong Zhang‡, and Nenghai Yu† †MOE-MS Key Lab of MCC University of Science and Technology of China [email protected], [email protected] ‡Microsoft Research Asia {muli,dozhang}@microsoft.com Abstract Long distance word reordering is a major challenge in statistical machine translation research. Previous work has shown using source syntactic trees is an effective way to tackle this problem between two languages with substantial word order difference. In this work, we further extend this line of exploration and propose a novel but simple approach, which utilizes a ranking model based on word order precedence in the target language to reposition nodes in the syntactic parse tree of a source sentence. The ranking model is automatically derived from word aligned parallel data with a syntactic parser for source language based on both lexical and syntactical features. We evaluated our approach on largescale Japanese-English and English-Japanese machine translation tasks, and show that it can significantly outperform the baseline phrasebased SMT system. 1 Introduction Modeling word reordering between source and target sentences has been a research focus since the emerging of statistical machine translation. In phrase-based models (Och, 2002; Koehn et al., 2003), phrase is introduced to serve as the fundamental translation element and deal with local reordering, while a distance based distortion model is used to coarsely depict the exponentially decayed word movement probabilities in language translation. Further work in this direction employed lexi∗This work has been done while the first author was visiting Microsoft Research Asia. calized distortion models, including both generative (Koehn et al., 2005) and discriminative (Zens and Ney, 2006; Xiong et al., 2006) variants, to achieve finer-grained estimations, while other work took into account the hierarchical language structures in translation (Chiang, 2005; Galley and Manning, 2008). Long-distance word reordering between language pairs with substantial word order difference, such as Japanese with Subject-Object-Verb (SOV) structure and English with Subject-Verb-Object (SVO) structure, is generally viewed beyond the scope of the phrase-based systems discussed above, because of either distortion limits or lack of discriminative features for modeling. The most notable solution to this problem is adopting syntax-based SMT models, especially methods making use of source side syntactic parse trees. There are two major categories in this line of research. One is tree-to-string model (Quirk et al., 2005; Liu et al., 2006) which directly uses source parse trees to derive a large set of translation rules and associated model parameters. The other is called syntax pre-reordering – an approach that re-positions source words to approximate target language word order as much as possible based on the features from source syntactic parse trees. This is usually done in a preprocessing step, and then followed by a standard phrase-based SMT system that takes the re-ordered source sentence as input to finish the translation. In this paper, we continue this line of work and address the problem of word reordering based on source syntactic parse trees for SMT. 
Similar to most previous work, our approach tries to rearrange the source tree nodes sharing a common parent to mimic 912 the word order in target language. To this end, we propose a simple but effective ranking-based approach to word reordering. The ranking model is automatically derived from the word aligned parallel data, viewing the source tree nodes to be reordered as list items to be ranked. The ranks of tree nodes are determined by their relative positions in the target language – the node in the most front gets the highest rank, while the ending word in the target sentence gets the lowest rank. The ranking model is trained to directly minimize the mis-ordering of tree nodes, which differs from the prior work based on maximum likelihood estimations of reordering patterns (Li et al., 2007; Genzel, 2010), and does not require any special tweaking in model training. The ranking model can not only be used in a pre-reordering based SMT system, but also be integrated into a phrasebased decoder serving as additional distortion features. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and experimental results show that our approach can bring significant improvements to the baseline phrase-based SMT system in both preordering and integrated decoding settings. In the rest of the paper, we will first formally present our ranking-based word reordering model, then followed by detailed steps of modeling training and integration into a phrase-based SMT system. Experimental results are shown in Section 5. Section 6 consists of more discussions on related work, and Section 7 concludes the paper. 2 Word Reordering as Syntax Tree Node Ranking Given a source side parse tree Te, the task of word reordering is to transform Te to T ′ e, so that e′ can match the word order in target language as much as possible. In this work, we only focus on reordering that can be obtained by permuting children of every tree nodes in Te. We use children to denote direct descendants of tree nodes for constituent trees; while for dependency trees, children of a node include not only all direct dependents, but also the head word itself. Figure 1 gives a simple example showing the word reordering between English and Japanese. By rearranging the position of tree nodes in the English I am trying to play music 私は 音楽を 再生 しようと している PRP VBP VBG TO VB NN NP VP VP NP S VP VP S I am trying to play music PRP VBP VBG TO VB NN NP VP VP NP S VP VP 私は 音楽を 再生 しようとしている Original Tree Reordered Tree S j0 j1 j2 j3 j4 e0 e1 e2 e3 e4 e5 j0 j1 j2 j3 j4 e0 e1 e2 e3 e4 e5 Figure 1: An English-to-Japanese sentence pair. By permuting tree nodes in the parse tree, the source sentence is reordered into the target language order. Constituent tree is shown above the source sentence; arrows below the source sentences show head-dependent arcs for dependency tree; word alignment links are lines without arrow between the source and target sentences. parse tree, we can obtain the same word order of Japanese translation. It is true that tree-based reordering cannot cover all word movement operations in language translation, previous work showed that this method is still very effective in practice (Xu et al., 2009, Visweswariah et al., 2010). Following this principle, the word reordering task can be broken into sub-tasks, in which we only need to determine the order of children nodes for all non-leaf nodes in the source parse tree. For a tree node t with children {c1, c2, . . . 
, cn}, we rearrange the children to target-language-like order {cπ(i1), cπ(i2), . . . , cπ(in)}. If we treat the reordered position π(i) of child ci as its “rank”, the reorder913 ing problem is naturally translated into a ranking problem: to reorder, we determine a “rank” for each child, then the children are sorted according to their “ranks”. As it is often impractical to directly assign a score for each permutation due to huge number of possible permutations, a widely used method is to use a real valued function f to assign a value to each node, which is called a ranking function (Herbrich et al., 2000). If we can guarantee (f(i) −f(j)) and (π(i) −π(j)) always has the same sign, we can get the same permutation as π because values of f are only used to sort the children. For example, consider the node rooted at trying in the dependency tree in Figure 1. Four children form a list {I, am, trying, play} to be ranked. Assuming ranking function f can assign values {0.94, −1.83, −1.50, −1.20} for {I, am, trying, play} respectively, we can get a sorted list {I, play, trying, am}, which is the desired permutation according to the target. More formally, for a tree node t with children {c1, c2, . . . , cn}, our ranking model assigns a rank f(ci, t) for each child ci, then the children are sorted according to the rank in a descending order. The ranking function f has the following form: f(ci, t) = X j θj(ci, t) · wj (1) where the θj is a feature representing the tree node t and its child ci, and wj is the corresponding feature weight. 3 Ranking Model Training To learn ranking function in Equation (1), we need to determine the feature set θ and learn weight vector w from reorder examples. In this section, we first describe how to extract reordering examples from parallel corpus; then we show our features for ranking function; finally, we discuss how to train the model from the extracted examples. 3.1 Reorder Example Acquisition For a sentence pair (e, f, a) with syntax tree Te on the source side, we need to determine which reordered tree T ′ e′ best represents the word order in target sentence f. For a tree node t in Te, if its children align to disjoint target spans, we can simply arrange them in the order of their corresponding target Problem with latter procedure 後者 lies の 手順 問題 で は … in … にある Problem with latter procedure 後者 lies の手順 問題 で は … in … にある (a) gold alignment (b) auto alignment Figure 2: Fragment of a sentence pair. (a) shows gold alignment; (b) shows automatically generated alignment which contains errors. spans. Figure 2 shows a fragment of one sentence pair in our training data. Consider the subtree rooted at word “Problem”. With the gold alignment, “Problem” is aligned to the 5th target word, and “with latter procedure” are aligned to target span [1, 3], thus we can simply put “Problem” after “with latter procedure”. Recursively applying this process down the subtree, we get “latter procedure with Problem” which perfectly matches the target language. As pointed out by (Li et al., 2007), in practice, nodes often have overlapping target spans due to erroneous word alignment or different syntactic structures between source and target sentences. (b) in Figure 2 shows the automatically generated alignment for the sentence pair fragment. The word “with” is incorrectly aligned to the 6th Japanese word “ha”; as a result, “with latter procedure” now has target span [1, 6], while “Problem” aligns to [5, 5]. 
Due to this overlapping, it becomes unclear which permutation of “Problem” and “with latter procedure” is a better match of the target phrase; we need a better metric to measure word order similarity between reordered source and target sentences. We choose to find the tree T ′ e′ with minimal alignment crossing-link number (CLN) (Genzel, 2010) to f as our golden reordered tree.1 Each crossing1A simple solution is to exclude all trees with overlapping target spans from training. But in our experiment, this method 914 link (i1j1, i2j2) is a pair of alignment links crossing each other. CLN reaches zero if f is monotonically aligned to e′, and increases as there are more word reordering between e′ and f. For example, in Figure 1, there are 6 crossing-links in the original tree: (e1j4, e2j3), (e1j4, e4j2), (e1j4, e5j1), (e2j3, e4j2), (e2j3, e5j1) and (e4j2, e5j1); thus CLN for the original tree is 6. CLN for the reordered tree is 0 as there are no crossing-links. This metric is easy to compute, and is not affected by unaligned words (Genzel, 2010). We need to find the reordered tree with minimal CLN among all reorder candidates. As the number of candidates is in the magnitude exponential with respect to the degree of tree Te 2, it is not always computationally feasible to enumerate through all candidates. Our solution is as follows. First, we give two definitions. • CLN(t): the number of crossing-links (i1j1, i2j2) whose source words e′ i1 and e′ i2 both fall under sub span of the tree node t. • CCLN(t): the number of crossing-links (i1j1, i2j2) whose source words e′ i1 and e′ i2 fall under sub span of t’s two different children nodes c1 and c2 respectively. Apparently CLN of a tree T ′ equals to CLN(root of T ′), and CLN(t) can be recursively expressed as: CLN(t) = CCLN(t) + X child c of t CLN(c) Take the original tree in Figure 1 for example. At the root node trying, CLN(trying) is 6 because there are six crossing-links under its sub-span: (e1j4, e2j3), (e1j4, e4j2), (e1j4, e5j1), (e2j3, e4j2), (e2j3, e5j1) and (e4j2, e5j1). On the other hand, CCLN(trying) is 5 because (e4j2, e5j1) falls under its child node play, thus does not count towards CCLN of trying. From the definition, we can easily see that CCLN(t) can be determined solely by the order of t’s direct children, and CLN(t) is only affected by discarded too many training instances and led to degraded reordering performance. 2In our experiments, there are nodes with more than 10 children for English dependency trees. the reorder in the subtree of t. This observation enables us to divide the task of finding the reordered tree T ′ e′ with minimal CLN into independently finding the children permutation of each node with minimal CCLN. Unfortunately, the time cost for the subtask is still O(n!) for a node with n children. Instead of enumerating through all permutations, we only search the Inversion Transduction Grammar neighborhood of the initial sequence (Tromble, 2009). As pointed out by (Tromble, 2009), the ITG neighborhood is large enough for reordering task, and can be searched through efficiently using a CKY decoder. After finding the best reordered tree T ′ e′, we can extract one reorder example from every node with more than one child. 3.2 Features Features for the ranking model are extracted from source syntax trees. 
For English-to-Japanese task, we extract features from Stanford English Dependency Tree (Marneffe et al., 2006), including lexicons, Part-of-Speech tags, dependency labels, punctuations and tree distance between head and dependent. For Japanese-to-English task, we use a chunkbased Japanese dependency tree (Kudo and Matsumoto, 2002). Different from features for English, we do not use dependency labels because they are not available from the Japanese parser. Additionally, Japanese function words are also included as features because they are important grammatical clues. The detailed feature templates are shown in Table 1. 3.3 Learning Method There are many well studied methods available to learn the ranking function from extracted examples., ListNet (?) etc. We choose to use RankingSVM (Herbrich et al., 2000), a pair-wised ranking method, for its simplicity and good performance. For every reorder example t with children {c1, c2, . . . , cn} and their desired permutation {cπ(i1), cπ(i2), . . . , cπ(in)}, we decompose it into a set of pair-wised training instances. For any two children nodes ci and cj with i < j , we extract a positive instance if π(i) < π(j), otherwise we extract a negative instance. The feature vector for both positive instance and negative instance is (θci −θcj), where θci and θcj are feature vectors for ci and cj 915 E-J cl cl · dst cl · pct cl · dst · pct cl · lcl cl · rcl cl · lcl · dst cl · rcl · dst cl · clex cl · clex cl · clex · dst cl · clex · dst cl · hlex cl · hlex cl · hlex · dst cl · hlex · dst cl · clex · pct cl · clex · pct cl · hlex · pct cl · hlex · pct J-E ctf ctf · dst ctf · lct ctf · rct ctf · lct · dst cl · rct · dst ctf · clex ctf · clex ctf · clex · dst ctf · clex · dst ctf · hf ctf · hf ctf · hf · dst ctf · hf · dst ctf · hlex ctf · hlex ctf · hlex · dst ctf · hlex · dst Table 1: Feature templates for ranking function. All templates are implicitly conjuncted with the pos tag of head node. c: child to be ranked; h: head node lc: left sibling of c; rc: right sibling of c l: dependency label; t: pos tag lex: top frequency lexicons f: Japanese function word dst: tree distance between c and h pct: punctuation node between c and h respectively. In this way, ranking function learning is turned into a simple binary classification problem, which can be easily solved by a two-class linear support vector machine. 4 Integration into SMT system There are two ways to integrate the ranking reordering model into a phrase-based SMT system: the prereorder method, and the decoding time constraint method. For pre-reorder method, ranking reorder model is applied to reorder source sentences during both training and decoding. Reordered sentences can go through the normal pipeline of a phrase-based decoder. The ranking reorder model can also be integrated into a phrase based decoder. Integrated method takes the original source sentence e as input, and ranking model generates a reordered e′ as a word order reference for the decoder. A simple penalty scheme is utilized to penalize decoder reordering violating ranking reorder model’s prediction e′. In this paper, our underlying decoder is a CKY decoder following Bracketing Transduction Grammar (Wu, 1997; Xiong et al., 2006), thus we show how the penalty is implemented in the BTG decoder as an example. Similar penalty can be designed for other decoders without much effort. 
Under BTG, three rules are used to derive translations: one unary terminal rule, one straight rule and one inverse rule: A → e/f A → [A1, A2] A → ⟨A1, A2⟩ We have three penalty triggers when any rules are applied during decoding: • Discontinuous penalty fdc: it fires for all rules when source span of either A, A1 or A2 is mapped to discontinuous span in e′. • Wrong straight rule penalty fst: it fires for straight rule when source spans of A1 and A2 are not mapped to two adjacent spans in e′ in straight order. • Wrong inverse rule penalty fiv: it fires for inverse rule when source spans of A1 and A2 are not mapped to two adjacent spans in e′ in inverse order. The above three penalties are added as additional features into the log-linear model of the phrasebased system. Essentially they are soft constraints to encourage the decoder to choose translations with word order similar to the prediction of ranking reorder model. 5 Experiments To test our ranking reorder model, we carry out experiments on large scale English-To-Japanese, and Japanese-To-English translation tasks. 5.1 Data 5.1.1 Evaluation Data We collect 3,500 Japanese sentences and 3,500 English sentences from the web. They come from 916 a wide range of domains, such as technical documents, web forum data, travel logs etc. They are manually translated into the other language to produce 7,000 sentence pairs, which are split into two parts: 2,000 pairs as development set (dev) and the other 5,000 pairs as test set (web test). Beside that, we collect another 999 English sentences from newswire domain which are translated into Japanese to form an out-of-domain test data set (news test). 5.1.2 Parallel Corpus Our parallel corpus is crawled from the web, containing news articles, technical documents, blog entries etc. After removing duplicates, we have about 18 million sentence pairs, which contain about 270 millions of English tokens and 320 millions of Japanese tokens. We use Giza++ (Och and Ney, 2003) to generate the word alignment for the parallel corpus. 5.1.3 Monolingual Corpus Our monolingual Corpus is also crawled from the web. After removing duplicate sentences, we have a corpus of over 10 billion tokens for both English and Japanese. This monolingual corpus is used to train a 4-gram language model for English and Japanese respectively. 5.2 Parsers For English, we train a dependency parser as (Nivre and Scholz, 2004) on WSJ portion of Penn Treebank, which are converted to dependency trees using Stanford Parser (Marneffe et al., 2006). We convert the tokens in training data to lower case, and re-tokenize the sentences using the same tokenizer from our MT system. For Japanese parser, we use CABOCHA, a chunk-based dependency parser (Kudo and Matsumoto, 2002). Some heuristics are used to adapt CABOCHA generated trees to our word segmentation. 5.3 Settings 5.3.1 Baseline System We use a BTG phrase-based system with a MaxEnt based lexicalized reordering model (Wu, 1997; Xiong et al., 2006) as our baseline system for both English-to-Japanese and Japanese-to-English Experiment. The distortion model is trained on the same parallel corpus as the phrase table using a home implemented maximum entropy trainer. In addition, a pre-reorder system using manual rules as (Xu et al., 2009) is included for the Englishto-Japanese experiment (ManR-PR). Manual rules are tuned by a bilingual speaker on the development set. 5.3.2 Ranking Reordering System Ranking reordering model is learned from the same parallel corpus as phrase table. 
For efficiency reasons, we only use 25% of the corpus to train our reordering model. LIBLINEAR (Fan et al., 2008) is used to do the SVM optimization for RankingSVM. We test it in both the pre-reorder setting (Rank-PR) and the integrated setting (Rank-IT).

5.4 End-to-End Result

       system    dev    web test  news test
  E-J  Baseline  21.45  21.12     14.18
       ManR-PR   23.00  22.42     15.61
       Rank-PR   22.92  22.51     15.90
       Rank-IT   23.14  22.85     15.72
  J-E  Baseline  25.39  24.20     14.26
       Rank-PR   26.57  25.56     15.42
       Rank-IT   26.72  25.87     15.27

Table 2: BLEU (%) scores on dev and test data for both the E-J and J-E experiments. All settings significantly improve over the baseline at the 95% confidence level. Baseline is the BTG phrase-based system; ManR-PR is pre-reordering with manual rules; Rank-PR is pre-reordering with the ranking reorder model; Rank-IT is the system with the integrated ranking reorder model.

From Table 2, we can see that our ranking reordering model significantly improves the performance for both the English-to-Japanese and Japanese-to-English experiments over the BTG baseline system. It also outperforms the manual rule set on the English-to-Japanese task, but the difference is not significant.

5.5 Reordering Performance

In order to show whether the improved performance is really due to improved reordering, we would like to measure reordering performance directly. As we do not have access to a gold reordered sentence set, we use the number of alignment crossing links between aligned sentence pairs as the measure of reordering performance. We train the ranking model on 25% of our parallel corpus, and use the remaining 75% as test data (auto). We sample a small corpus (575 sentence pairs) and align it manually (man-small). We denote the automatic alignment for these 575 sentences as (auto-small).

       setting  auto  auto-small  man-small
       None     36.3  35.9        40.1
  E-J  Oracle    4.3   4.1         7.4
       ManR     13.4  13.6        16.7
       Rank     12.1  12.8        17.2
  J-E  Oracle    6.9   7.0         9.4
       Rank     15.7  15.3        20.5

Table 3: Reordering performance measured by crossing-link number per sentence. None means the original sentences without reordering; Oracle means the best permutation allowed by the source parse tree; ManR refers to the manual reorder rules; Rank means the ranking reordering model.

From Table 3, we can see that our ranking reordering model indeed significantly reduces the crossing-link numbers over the original sentence pairs. On the other hand, the performance of the ranking reorder model still falls far short of the oracle, which is the lowest crossing-link number over all permutations allowed by the parse tree. By manual analysis, we find that the gap is due both to errors of the ranking reorder model and to errors from the word alignment and the parser. Another thing to note is that the crossing-link number of the manual alignment is higher than that of the automatic alignment. The reason is that our annotators tend to align function words which might be left unaligned by the automatic word aligner.
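The crossing-link number used in Table 3 is straightforward to compute from a word alignment. The sketch below counts crossing links for one sentence pair; representing an alignment as a list of (source index, target index) links is our assumption, not the authors' implementation.

```python
# Minimal sketch: crossing-link number (CLN) of one aligned sentence pair.
# An alignment is assumed to be a list of (source_index, target_index) links.

def crossing_links(alignment):
    """Count pairs of alignment links that cross each other."""
    crossings = 0
    for a in range(len(alignment)):
        for b in range(a + 1, len(alignment)):
            (s1, t1), (s2, t2) = alignment[a], alignment[b]
            # Two links cross if their source and target orders disagree.
            if (s1 - s2) * (t1 - t2) < 0:
                crossings += 1
    return crossings

# Toy usage: the last two links swap order on the target side.
print(crossing_links([(0, 0), (1, 2), (2, 1)]))  # -> 1 crossing link
```

Averaging this count over all sentence pairs in a test set gives the per-sentence figures reported in Table 3.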
5.6 Effect of Ranking Features

Here we examine the effect of features for the ranking reorder model. We compare their influence on RankingSVM accuracy, alignment crossing-link number, end-to-end BLEU score, and model size. As Table 4 shows, a major part of the reduction in CLN comes from features such as part-of-speech tags, dependency labels (for English), function words (for Japanese), and the distance and punctuation between child and head.

       Features   Acc.   CLN   BLEU   Feat.#
  E-J  tag+label  88.6   16.4  22.24  26k
       +dst       91.5   13.5  22.66  55k
       +pct       92.2   13.1  22.73  79k
       +lex100    92.9   12.1  22.85  347k
       +lex1000   94.0   11.5  22.79  2,410k
       +lex2000   95.2   10.7  22.81  3,794k
  J-E  tag+fw     85.0   18.6  25.43  31k
       +dst       90.3   16.9  25.62  65k
       +lex100    91.6   15.7  25.87  293k
       +lex1000   92.4   14.8  25.91  2,156k
       +lex2000   93.0   14.3  25.84  3,297k

Table 4: Effect of ranking features. Acc. is RankingSVM accuracy in percentage on the training data; CLN is the crossing-link number per sentence on the parallel corpus with automatically generated word alignments; BLEU is the BLEU score in percentage on the web test set in the Rank-IT setting (system with integrated ranking reordering model); lexn means the n most frequent lexicons in the training corpus.

These features also correspond to BLEU score improvements in the end-to-end evaluations. Lexicon features generally continue to improve RankingSVM accuracy and reduce CLN on the training data, but they do not bring further improvement for the SMT systems beyond the top 100 most frequent words. Our explanation is that less frequent lexicons tend to help local reordering only, which is already handled by the underlying phrase-based system.

5.7 Performance on different domains

From Table 2 we can see that the pre-reorder method has a higher BLEU score on the news test set, while the integrated model performs better on the web test set, which contains informal texts. By error analysis, we find that the parser commits more errors on informal texts, and informal texts usually have more flexible translations. The pre-reorder method makes a "hard" decision before decoding, and thus is more sensitive to parser errors; on the other hand, the integrated model is forced to use a longer distortion limit, which leads to more search errors during decoding. It is possible to use system combination methods to get the best of both systems, but we leave this to future work.

6 Discussion on Related Work

There have been several studies focusing on compiling hand-crafted syntactic reorder rules. Collins et al. (2005), Wang et al. (2007), Ramanathan et al. (2008), and Lee et al. (2010) developed rules for German-English, Chinese-English, English-Hindi and English-Japanese respectively. Xu et al. (2009) designed a clever precedence reordering rule set for translation from English to several SOV languages. The drawback of hand-crafted rules is that they depend upon expert knowledge to produce and are limited to their targeted language pairs. Automatically learning syntactic reordering rules has also been explored in several works. Li et al. (2007) and Visweswariah et al. (2010) learned probabilities of reordering patterns from constituent trees using either maximum entropy or maximum likelihood estimation. Since reordering patterns are matched against a tree node together with all its direct children, a data sparseness problem arises when tree nodes have many children (Li et al., 2007); Visweswariah et al. (2010) also mentioned that their method yielded no improvement when applied to dependency trees in their initial experiments. Genzel (2010) dealt with the data sparseness problem by using a window heuristic, and learned reordering pattern sequences from dependency trees. Even with the window heuristic, they were unable to evaluate all candidates due to the huge number of possible patterns. Different from the previous approaches, we treat syntax-based reordering as a ranking problem between different source tree nodes.
Our method does not require the source nodes to match some specific patterns, but encodes reordering knowledge in the form of a ranking function, which naturally handles reordering between any number of tree nodes; the ranking function is trained by well-established rank learning method to minimize the number of mis-ordered tree nodes in the training data. Tree-to-string systems (Quirk et al., 2005; Liu et al., 2006) model syntactic reordering using minimal or composed translation rules, which may contain reordering involving tree nodes from multiple tree levels. Our method can be naturally extended to deal with such multiple level reordering. For a tree-tostring rule with multiple tree levels, instead of ranking the direct children of the root node, we rank all leaf nodes (Most are frontier nodes (Galley et al., 2006)) in the translation rule. We need to redesign our ranking feature templates to encode the reordering information in the source part of the translation rules. We need to remember the source side context of the rules, the model size would still be much smaller than a full-fledged tree-to-string system because we do not need to explicitly store the target variants for each rule. 7 Conclusion and Future Work In this paper we present a ranking based reordering method to reorder source language to match the word order of target language given the source side parse tree. Reordering is formulated as a task to rank different nodes in the source side syntax tree according to their relative position in the target language. The ranking model is automatically trained to minimize the mis-ordering of tree nodes in the training data. Large scale experiment shows improvement on both reordering metric and SMT performance, with up to 1.73 point BLEU gain in our evaluation test. In future work, we plan to extend the ranking model to handle reordering between multiple levels of source trees. We also expect to explore better way to integrate ranking reorder model into SMT system instead of a simple penalty scheme. Along the research direction of preprocessing the source language to facilitate translation, we consider to not only change the order of the source language, but also inject syntactic structure of the target language into source language by adding pseudo words into source sentences. Acknowledgements Nan Yang and Nenghai Yu were partially supported by Fundamental Research Funds for the Central Universities (No. WK2100230002), National Natural Science Foundation of China (No. 60933013), and National Science and Technology Major Project (No. 2010ZX03004-003). 919 References David Chiang. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proc. ACL, pages 263-270. Michael Collins, Philipp Koehn and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proc. ACL. R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. 2008. LIBLINEAR: A library for large linear classification. In Journal of Machine Learning Research. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable Inference and Training of Context-Rich Syntactic Translation Models. In Proc. ACL-Coling, pages 961-968. Michel Galley and Christopher D. Manning. 2008. A Simple and Effective Hierarchical Phrase Reordering Model. In Proc. EMNLP, pages 263-270. Dmitriy Genzel. 2010. Automatically Learning Sourceside Reordering Rules for Large Scale Machine Translation. In Proc. Coling, pages 376-384. 
Ralf Herbrich, Thore Graepel, and Klaus Obermayer 2000. Large Margin Rank Boundaries for Ordinal Regression. In Advances in Large Margin Classifiers, pages 115-132. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne and David Talbot. 2005. Edinborgh System Description for the 2005 IWSLT Speech Translation Evaluation. In International Workshop on Spoken Language Translation. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proc. HLTNAACL, pages 127-133. Taku Kudo, Yuji Matsumoto. 2002. Japanese Dependency Analysis using Cascaded Chunking. In Proc. CoNLL, pages 63-69. Young-Suk Lee, Bing Zhao and Xiaoqiang Luo. 2010. Constituent reordering and syntax models for Englishto-Japanese statistical machine translation. In Proc. Coling. Chi-Ho Li, Minghui Li, Dongdong Zhang, Mu Li and Ming Zhou and Yi Guan 2007. A Probabilistic Approach to Syntax-based Reordering for Statistical Machine Translation. In Proc. ACL, pages 720-727. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-String Alignment Template for Statistical Machine Translation. In Proc. ACL-Coling, pages 609-616. Marie-Catherine de Marneffe, Bill MacCartney and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In LREC 2006 Joakim Nivre and Mario Scholz 2004. Deterministic Dependency Parsing for English Text. In Proc. Coling. Franz J. Och. 2002. Statistical Machine Translation: From Single Word Models to Alignment Template. Ph.D.Thesis, RWTH Aachen, Germany Franz J. Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1): pages 19-51. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency Treelet Translation: Syntactically Informed Phrasal SMT. In Proc. ACL, pages 271-279. A. Ramanathan, Pushpak Bhattacharyya, Jayprasad Hegde, Ritesh M. Shah and Sasikumar M. 2008. Simple syntactic and morphological processing can help English-Hindi Statistical Machine Translation. In Proc. IJCNLP. Roy Tromble. 2009. Search and Learning for the Linear Ordering Problem with an Application to Machine Translation. Ph.D. Thesis. Karthik Visweswariah, Jiri Navratil, Jeffrey Sorensen, Vijil Chenthamarakshan and Nandakishore Kambhatla. 2010. Syntax Based Reordering with Automatically Derived Rules for Improved Statistical Machine Translation. In Proc. Coling, pages 1119-1127. Chao Wang, Michael Collins, Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proc. EMNLP-CoNLL. Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3): pages 377-403. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proc. ACL-Coling, pages 521-528. Peng Xu, Jaeho Kang, Michael Ringgaard, Franz Och. 2009. Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Proc. HLTNAACL, pages 376-384. Richard Zens and Hermann Ney. 2006. Discriminative Reordering Models for Statistical Machine Translation. In Proc. Workshop on Statistical Machine Translation, HLT-NAACL, pages 127-133. 920
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 921–929, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics Character-Level Machine Translation Evaluation for Languages with Ambiguous Word Boundaries Chang Liu and Hwee Tou Ng Department of Computer Science National University of Singapore 13 Computing Drive, Singapore 117417 {liuchan1,nght}@comp.nus.edu.sg Abstract In this work, we introduce the TESLACELAB metric (Translation Evaluation of Sentences with Linear-programming-based Analysis – Character-level Evaluation for Languages with Ambiguous word Boundaries) for automatic machine translation evaluation. For languages such as Chinese where words usually have meaningful internal structure and word boundaries are often fuzzy, TESLA-CELAB acknowledges the advantage of character-level evaluation over word-level evaluation. By reformulating the problem in the linear programming framework, TESLACELAB addresses several drawbacks of the character-level metrics, in particular the modeling of synonyms spanning multiple characters. We show empirically that TESLACELAB significantly outperforms characterlevel BLEU in the English-Chinese translation evaluation tasks. 1 Introduction Since the introduction of BLEU (Papineni et al., 2002), automatic machine translation (MT) evaluation has received a lot of research interest. The Workshop on Statistical Machine Translation (WMT) hosts regular campaigns comparing different machine translation evaluation metrics (Callison-Burch et al., 2009; Callison-Burch et al., 2010; Callison-Burch et al., 2011). In the WMT shared tasks, many new generation metrics, such as METEOR (Banerjee and Lavie, 2005), TER (Snover et al., 2006), and TESLA (Liu et al., 2010) have consistently outperformed BLEU as judged by the correlations with human judgments. The research on automatic machine translation evaluation is important for a number of reasons. Automatic translation evaluation gives machine translation researchers a cheap and reproducible way to guide their research and makes it possible to compare machine translation methods across different studies. In addition, machine translation system parameters are tuned by maximizing the automatic scores. Some recent research (Liu et al., 2011) has shown evidence that replacing BLEU by a newer metric, TESLA, can improve the human judged translation quality. Despite the importance and the research interest on automatic MT evaluation, almost all existing work has focused on European languages, in particular on English. Although many methods aim to be language neutral, languages with very different characteristics such as Chinese do present additional challenges. The most obvious challenge for Chinese is that of word segmentation. Unlike European languages, written Chinese is not split into words. Segmenting Chinese sentences into words is a natural language processing task in its own right (Zhao and Liu, 2010; Low et al., 2005). However, many different segmentation standards exist for different purposes, such as Microsoft Research Asia (MSRA) for Named Entity Recognition (NER), Chinese Treebank (CTB) for parsing and part-of-speech (POS) tagging, and City University of Hong Kong (CITYU) and Academia Sinica (AS) for general word segmentation and POS tagging. It is not clear which standard is the best in a given scenario. 
The only prior work attempting to address the problem of word segmentation in automatic MT evaluation for Chinese that we are aware of is Li et 921 买 伞 buy umbrella 买 雨伞 buy umbrella 买 雨 伞 buy rain umbrella Figure 1: Three forms of the same expression buy umbrella in Chinese al. (2011). The work compared various MT evaluation metrics (BLEU, NIST, METEOR, GTM, 1 −TER) with different segmentation schemes, and found that treating every single character as a token (character-level MT evaluation) gives the best correlation with human judgments. 2 Motivation Li et al. (2011) identify two reasons that characterbased metrics outperform word-based metrics. For illustrative purposes, we use Figure 1 as a running example in this paper. All three expressions are semantically identical (buy umbrella). The first two forms are identical because 雨伞1 and 伞are synonyms. The last form is simply an (arguably wrong) alternative segmented form of the second expression. 1. Word-based metrics do not award partial matches, e.g., 买_雨伞and 买_伞would be penalized for the mismatch between 雨伞and 伞. Character-based metrics award the match between characters 伞and 伞. 2. Character-based metrics do not suffer from errors and differences in word segmentation, so 买_雨伞and 买_雨_伞would be judged exactly equal. Li et al. (2011) conduct empirical experiments to show that character-based metrics consistently outperform their word-based counterparts. Despite that, we observe two important problems for the character-based metrics: 1. Although partial matches are partially awarded, the mechanism breaks down for n-grams where 1Literally, rain umbrella. n > 1. For example, between 买_雨_伞and 买_伞, higher-order n-grams such as 买_雨and 雨_伞still have no match, and will be penalized accordingly, even though 买_雨_伞and 买_伞should match exactly. N-grams such as 买_雨which cross natural word boundaries and are meaningless by themselves can be particularly tricky. 2. Character-level metrics can utilize only a small part of the Chinese synonym dictionary, such as 你and 您(you). The majority of Chinese synonyms involve more than one character, such as 雨伞and 伞(umbrella), and 儿童and 小孩 (child). In this work, we attempt to address both of these issues by introducing TESLA-CELAB, a characterlevel metric that also models word-level linguistic phenomenon. We formulate the n-gram matching process as a real-valued linear programming problem, which can be solved efficiently. The metric is based on the TESLA automatic MT evaluation framework (Liu et al., 2010; Dahlmeier et al., 2011). 3 The Algorithm 3.1 Basic Matching We illustrate our matching algorithm using the examples in Figure 1. Let 买雨伞be the reference, and 买伞be the candidate translation. We use Cilin (同义词词林)2 as our synonym dictionary. The basic n-gram matching problem is shown in Figure 2. Two n-grams are connected if they are identical, or if they are identified as synonyms by Cilin. Notice that all n-grams are put in the same matching problem regardless of n, unlike in translation evaluation metrics designed for European languages. This enables us to designate ngrams with different values of n as synonyms, such as 雨伞(n = 2) and 伞(n = 1). In this example, we are able to make a total of two successful matches. The recall is therefore 2/6 and the precision is 2/3. 
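To make the basic matching concrete, the sketch below enumerates the character n-grams of 买雨伞 and 买伞 and counts one-to-one matches. The tiny synonym set stands in for Cilin, and the greedy assignment is a simplification: the metric itself solves this matching as a real-valued linear program, as described in the following subsections.

```python
# Sketch of the basic n-gram matching of Section 3.1 (greedy simplification).
# Each n-gram may be matched at most once, by identity or by a synonym edge.

def ngrams(chars, max_n=4):
    return [chars[i:i + n] for n in range(1, max_n + 1)
            for i in range(len(chars) - n + 1)]

SYNONYMS = {("雨伞", "伞"), ("伞", "雨伞")}  # toy stand-in for Cilin

def match(ref, cand):
    ref_ngrams, cand_ngrams = ngrams(ref), ngrams(cand)
    used_ref, used_cand, matches = set(), set(), 0
    for i, r in enumerate(ref_ngrams):
        for j, c in enumerate(cand_ngrams):
            if i in used_ref or j in used_cand:
                continue
            if r == c or (r, c) in SYNONYMS:
                used_ref.add(i)
                used_cand.add(j)
                matches += 1
                break
    return matches, len(ref_ngrams), len(cand_ngrams)

m, n_ref, n_cand = match("买雨伞", "买伞")
print(m, n_ref, n_cand)        # 2 matches, 6 reference n-grams, 3 candidate n-grams
print(m / n_ref, m / n_cand)   # recall 2/6, precision 2/3
```

On this example the sketch recovers the two matches described above, i.e. recall 2/6 and precision 2/3.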
2http://ir.hit.edu.cn/phpwebsite/index.php?module=pagemaster &PAGE_user_op=view_page&PAGE_id=162 922 买 雨 伞 买雨 雨伞 买雨伞 买 伞 买伞 买 Figure 2: The basic n-gram matching problem 买 雨 伞 买雨 雨伞 买雨伞 买 伞 买伞 买 Figure 3: The n-gram matching problem after phrase matching 3.2 Phrase Matching We note in Figure 2 that the trigram 买雨伞and the bigram 买伞are still unmatched, even though the match between 雨伞and 伞should imply the match between 买雨伞and 买伞. We infer the matching of such phrases using a dynamic programming algorithm. Two n-grams are considered synonyms if they can be segmented into synonyms that are aligned. With this extension, we are able to match 买雨伞and 买伞(since 买 matches 买and 雨伞matches 伞). The matching problem is now depicted by Figure 3. The linear programming problem is mathematically described as follows. The variables w(·, ·) are the weights assigned to the edges, w(买, 买) ∈[0, 1] w(伞, 伞) ∈[0, 1] w(雨伞, 伞) ∈[0, 1] w(买雨伞, 买伞) ∈[0, 1] We require that for any node N, the sum of weights assigned to edges linking N must not exceed one. wref(买) = w(买, 买) wref(伞) = w(伞, 伞) wref(雨伞) = w(雨伞, 伞) wref(买雨伞) = w(买雨伞, 买伞) 伞 雨 伞 雨伞 Figure 4: A covered n-gram matching problem wcand(买) = w(买, 买) wcand(伞) = w(伞, 伞) + w(雨伞, 伞) wcand(买伞) = w(买雨伞, 买伞) where wref(X) ∈[0, 1] ∀X wcand(X) ∈[0, 1] ∀X Now we maximize the total match, w(买, 买)+w(伞, 伞)+w(雨伞, 伞)+w(买雨伞, 买伞) In this example, the best match is 3, resulting in a recall of 3/6 and a precision of 3/3. 3.3 Covered Matching In Figure 3, n-grams 雨and 买雨in the reference remain impossible to match, which implies misguided penalty for the candidate translation. We observe that since 买雨伞has been matched, all its sub-ngrams should be considered matched as well, including 雨and 买雨. We call this the covered n-gram matching rule. This relationship is implicit in the matching problem for English translation evaluation metrics where words are well delimited. But with phrase matching in Chinese, it must be modeled explicitly. However, we cannot simply perform covered ngram matching as a post processing step. As an example, suppose we are matching phrases 雨伞and 伞, as shown in Figure 4. The linear programming solver may come up with any of the solutions where w(伞, 伞) + w(雨伞, 伞) = 1, w(伞, 伞) ∈[0, 1], and w(雨伞, 伞) ∈[0, 1]. To give the maximum coverage for the node 雨, only the solution w(伞, 伞) = 0, w(雨伞, 伞) = 1 is accepted. This indicates the need to model covered 923 n-gram matching in the linear programming problem itself. We return to the matching of the reference 买雨 伞and the candidate 买伞in Figure 3. On top of the w(·) variables already introduced, we add the variables maximum covering weights c(·). Each c(X) represents the maximum w(Y ) variable where ngram Y completely covers n-gram X. cref(买) ≤max(wref(买), wref(买雨), wref(买雨伞)) cref(雨) ≤max(wref(雨), wref(买雨), wref(雨伞), wref(买雨伞)) cref(伞) ≤max(wref(伞), wref(雨伞), wref(买雨伞)) cref(买雨) ≤max(wref(买雨), wref(买雨伞)) cref(雨伞) ≤max(wref(雨伞), wref(买雨伞)) cref(买雨伞) ≤wref(买雨伞) ccand(买) ≤max(wcand(买), wcand(买伞)) ccand(伞) ≤max(wcand(伞), wcand(买伞)) ccand(买伞) ≤wcand(买伞) where cref(X) ∈[0, 1] ∀X ccand(X) ∈[0, 1] ∀X However, the max(·) operator is not allowed in the linear programming formulation. We get around this by approximating max(·) with the sum instead. Hence, cref(买) ≤wref(买) + wref(买雨)+ wref(买雨伞) cref(雨) ≤wref(雨) + wref(买雨)+ wref(雨伞) + wref(买雨伞) . . . We justify this approximation by the following observation. Consider the sub-problem consisting of just the w(·, ·), wref(·), wcand(·) variables and their associated constraints. 
This sub-problem can be seen as a maximum flow problem where all constants are integers, hence there exists an optimal solution where each of the w variables is assigned a value of either 0 or 1. For such a solution, the max and the sum forms are equivalent, since the cref(·) and ccand(·) variables are also constrained to the range [0, 1]. The maximum flow equivalence breaks down when the c(·) variables are introduced, so in the general case, replacing max with sum is only an approximation. Returning to our sample problem, the linear programming solver simply needs to assign:

w(买雨伞, 买伞) = 1
wref(买雨伞) = 1
wcand(买伞) = 1

Consequently, due to the maximum covering weights constraint, we can give the following value assignment, implying that all n-grams have been matched:

cref(X) = 1  ∀X
ccand(X) = 1  ∀X

3.4 The Objective Function

We now define our objective function in terms of the c(·) variables. The recall is a function of \sum_X cref(X), and the precision is a function of \sum_Y ccand(Y), where X ranges over all n-grams of the reference, and Y ranges over all n-grams of the candidate translation. Many prior translation evaluation metrics such as MAXSIM (Chan and Ng, 2008) and TESLA (Liu et al., 2010; Dahlmeier et al., 2011) use the F-0.8 measure as the final score:

F_{0.8} = \frac{\text{Precision} \times \text{Recall}}{0.8 \times \text{Precision} + 0.2 \times \text{Recall}}

Under some simplifying assumptions (specifically, that precision = recall), basic calculus shows that F0.8 is four times as sensitive to recall as to precision. Following the same reasoning, we want to place more emphasis on recall than on precision. We are also constrained by the linear programming framework, hence we set the objective function as

\frac{1}{Z} \left( \sum_X cref(X) + f \sum_Y ccand(Y) \right), \qquad 0 < f < 1
Inter-judge agreement is measured by the Kappa coefficient, defined as: Kappa = P(A) −P(E) 1 −P(E) where P(A) is the percentage of agreement, and P(E) is the percentage of agreement by pure 3Our empirical experiments suggest that the correlations do plateau near this value. For simplicity, we choose not to tune f on the training data. Judgment Set 2 3 1 0.4406 0.4355 2 0.4134 Table 1: Inter-judge Kappa for the NIST 2008 EnglishChinese task chance. The inter-judge Kappa is 0.41 for fluency, 0.40 for adequacy, and 0.57 for ranking. Kappa values between 0.4 and 0.6 are considered moderate, and the numbers are in line with other comparable experiments. 4.2 NIST 2008 English-Chinese MT Task The NIST 2008 English-Chinese MT task consists of 127 documents with 1,830 segments, each with four reference translations and eleven automatic MT system translations. The data is available as LDC2010T01 from the Linguistic Data Consortiuim (LDC). The domain is newswire texts. The average English source sentence is 21.5 words long and the average Chinese reference translation is 43.2 characters long. Since no manual evaluation is given for the data set, we recruited twelve bilingual judges to evaluate the first thirty documents for adequacy and fluency (355 segments for a total of 355 × 11 = 3, 905 translated segments). The final score of a sentence is the average of its adequacy and fluency scores. Each judge works on one quarter of the sentences so that each translation is judged by three judges. The judgments are concatenated to form three full sets of judgments. We ignore judgments where two sentences are equal in quality, so that there are only two possible outcomes (X is better than Y; or Y is better than X), and P(E) = 1/2. The Kappa values are shown in Table 1. The values indicate moderate agreement, and are in line with other comparable experiments. 4.3 Baseline Metrics 4.3.1 BLEU Although word-level BLEU has often been found inferior to the new-generation metrics when the target language is English or other European languages, prior research has shown that character-level BLEU is highly competitive when the target language is Chinese (Li et al., 2011). Therefore, we 925 Segment Pearson Spearman rank Metric Type consistency correlation correlation BLEU character-level 0.7004 0.9130 0.9643 TESLA-M word-level 0.6771 0.9167 0.8929 TESLA-CELAB− character-level 0.7018 0.9229 0.9643 TESLA-CELAB hybrid 0.7281∗ 0.9490∗∗ 0.9643 Table 2: Correlation with human judgment on the IWSLT 2008 English-Chinese challenge task. * denotes better than the BLEU baseline at 5% significance level. ** denotes better than the BLEU baseline at 1% significance level. Segment Pearson Spearman rank Metric Type consistency correlation correlation BLEU character-level 0.7091 0.8429 0.7818 TESLA-M word-level 0.6969 0.8301 0.8091 TESLA-CELAB− character-level 0.7158 0.8514 0.8227 TESLA-CELAB hybrid 0.7162 0.8923∗∗ 0.8909∗∗ Table 3: Correlation with human judgment on the NIST 2008 English-Chinese MT task. ** denotes better than the BLEU baseline at 1% significance level. use character-level BLEU as our main baseline. The correlations of character-level BLEU and the average human judgments are shown in the first row of Tables 2 and 3 for the IWSLT and the NIST data set, respectively. Segment-level consistency is defined as the number of correctly predicted pairwise rankings divided by the total number of pairwise rankings. Ties are excluded from the calculation. 
We also report the Pearson correlation and the Spearman rank correlation of the system-level scores. Note that in the IWSLT data set, the Spearman rank correlation is highly unstable due to the small number of participating systems. 4.3.2 TESLA-M In addition to character-level BLEU, we also present the correlations for the word-level metric TESLA. Compared to BLEU, TESLA allows more sophisticated weighting of n-grams and measures of word similarity including synonym relations. It has been shown to give better correlations than BLEU for many European languages including English (Callison-Burch et al., 2011). However, its use of POS tags and synonym dictionaries prevents its use at the character-level. We use TESLA as a representative of a competitive word-level metric. We use the Stanford Chinese word segmenter (Tseng et al., 2005) and POS tagger (Toutanova et al., 2003) for preprocessing and Cilin for synonym definition during matching. TESLA has several variants, and the simplest and often the most robust, TESLA-M, is used in this work. The various correlations are reported in the second row of Tables 2 and 3. The scores show that word-level TESLA-M has no clear advantage over character-level BLEU, despite its use of linguistic features. We consider this conclusion to be in line with that of Li et al. (2011). 4.4 TESLA-CELAB In all our experiments here we use TESLA-CELAB with n-grams for n up to four, since the vast majority of Chinese words, and therefore synonyms, are at most four characters long. The correlations between the TESLA-CELAB scores and human judgments are shown in the last row of Tables 2 and 3. We conducted significance testing using the resampling method of (Koehn, 2004). Entries that outperform the BLEU baseline at 5% significance level are marked with ‘*’, and those that outperform at the 1% significance level are marked with ‘**’. The results indicate that TESLA-CELAB significantly outperforms BLEU. For comparison, we also run TESLA-CELAB without the use of the Cilin dictionary, reported in the third row of Tables 2 and 3 and denoted as TESLA-CELAB−. This disables TESLA926 CELAB’s ability to detect word-level synonyms and turns TESLA-CELAB into a linear programming based character-level metric. The performance of TESLA-CELAB−is comparable to the characterlevel BLEU baseline. Note that • TESLA-M can process word-level synonyms, but does not award character-level matches. • TESLA-CELAB−and character-level BLEU award character-level matches, but do not consider word-level synonyms. • TESLA-CELAB can process word-level synonyms and can award character-level matches. Therefore, the difference between TESLA-M and TESLA-CELAB highlights the contribution of character-level matching, and the difference between TESLA-CELAB−and TESLA-CELAB highlights the contribution of word-level synonyms. 4.5 Sample Sentences Some sample sentences taken from the IWSLT test set are shown in Table 4 (some are simplified from the original). The Cilin dictionary correctly identified the following as synonyms: 周 = 星期 week 女儿 = 闺女 daughter 你 = 您 you 工作 = 上班 work The dictionary fails to recognize the following synonyms: 一个 = 个 a 这儿 = 这里 here However, partial awards are still given for the matching characters 这and 个. Based on these synonyms, TESLA-CELAB is able to award less trivial n-gram matches, such as 下 周=下星期, 个女儿=个闺女, and 工作吗=上班吗, as these pairs can all be segmented into aligned synonyms. 
The covered n-gram matching rule is then able to award tricky n-grams such as 下星, 个女, 个 闺, 作吗and 班吗, which are covered by 下星期, 个女儿, 个闺女, 工作吗and 上班吗respectively. Note also that the word segmentations shown in these examples are for clarity only. The TESLACELAB algorithm does not need pre-segmented Reference: 下 周 。 next week . Candidate: 下 星期 。 next week . Reference: 我 有 一个 女儿 。 I have a daughter . Candidate: 我 有 个 闺女 。 I have a daughter . Reference: 你 在 这儿 工作 吗 ? you at here work qn ? Candidate: 您 在 这里 上班 吗 ? you at here work qn ? Table 4: Sample sentences from the IWSLT 2008 test set Schirm kaufen umbrella buy Regenschirm kaufen umbrella buy Regen schirm kaufen rain umbrella buy Figure 5: Three forms of buy umbrella in German sentences, and essentially finds multi-character synonyms opportunistically. 5 Discussion and Future Work 5.1 Other Languages with Ambiguous Word Boundaries Although our experiments here are limited to Chinese, many other languages have similarly ambiguous word boundaries. For example, in German, the exact counterpart to our example exists, as depicted in Figure 5. Regenschirm, literally rain-umbrella, is a synonym of Schirm. The first two forms in Figure 5 appear in natural text, and in standard BLEU, they would be penalized for the non-matching words Schirm and Regenschirm. Since compound nouns such as Regenschirm are very common in German and generate many out-of-vocabulary words, a common preprocessing step in German translation (and translation evaluation to a lesser extent) is to split compound words, and we end up with the last form Regen schirm kaufen. This process is analogous to 927 Chinese word segmentation. We plan to conduct experiments on German and other Asian languages with the same linguistic phenomenon in future work. 5.2 Fractional Similarity Measures In the current formulation of TESLA-CELAB, two n-grams X and Y are either synonyms which completely match each other, or are completely unrelated. In contrast, the linear-programming based TESLA metric allows fractional similarity measures between 0 (completely unrelated) and 1 (exact synonyms). We can then award partial scores for related words, such as those identified as such by WordNet or those with the same POS tags. Supporting fractional similarity measures is nontrivial in the TESLA-CELAB framework. We plan to address this in future work. 5.3 Fractional Weights for N-grams The TESLA-M metric allows each n-gram to have a weight, which is primarily used to discount function words. TESLA-CELAB can support fractional weights for n-grams as well by the following extension. We introduce a function m(X) that assigns a weight in [0, 1] for each n-gram X. Accordingly, our objective function is replaced by: 1 Z X X m(X)cref(X) + f X Y m(Y )ccand(Y ) ! where Z is a normalizing constant so that the metric has a range of [0, 1]. Z = X X m(X) + f X Y m(Y ) However, experiments with different weight functions m(·) on the test data set failed to find a better weight function than the currently implied m(·) = 1. This is probably due to the linguistic characteristics of Chinese, where human judges apparently give equal importance to function words and content words. In contrast, TESLA-M found discounting function words very effective for English and other European languages such as German. We plan to investigate this in the context of non-Chinese languages. 
6 Conclusion In this work, we devise a new MT evaluation metric in the family of TESLA (Translation Evaluation of Sentences with Linear-programming-based Analysis), called TESLA-CELAB (Character-level Evaluation for Languages with Ambiguous word Boundaries), to address the problem of fuzzy word boundaries in the Chinese language, although neither the phenomenon nor the method is unique to Chinese. Our metric combines the advantages of characterlevel and word-level metrics: 1. TESLA-CELAB is able to award scores for partial word-level matches. 2. TESLA-CELAB does not have a segmentation step, hence it will not introduce word segmentation errors. 3. TESLA-CELAB is able to take full advantage of the synonym dictionary, even when the synonyms differ in the number of characters. We show empirically that TESLA-CELAB significantly outperforms the strong baseline of character-level BLEU in two well known English-Chinese MT evaluation data sets. The source code of TESLA-CELAB is available from http://nlp.comp.nus.edu.sg/software/. Acknowledgments This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office. References Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Chris Callison-Burch, Philipp Koehn, Christof Monz, and Josh Schroeder. 2009. Findings of the 2009 workshop on statistical machine translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation. 928 Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F. Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR. Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation. Yee Seng Chan and Hwee Tou Ng. 2008. MAXSIM: A maximum similarity metric for machine translation evaluation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Daniel Dahlmeier, Chang Liu, and Hwee Tou Ng. 2011. TESLA at WMT2011: Translation evaluation and tunable metric. In Proceedings of the Sixth Workshop on Statistical Machine Translation. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Maoxi Li, Chengqing Zong, and Hwee Tou Ng. 2011. Automatic evaluation of Chinese translation output: word-level or character-level? In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Short Papers. Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2010. TESLA: Translation evaluation of sentences with linear-programming-based analysis. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR. Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2011. Better evaluation metrics lead to better machine translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. 
Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to Chinese word segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Michael Paul. 2008. Overview of the iwslt 2008 evaluation campaign. In Proceedings of the International Workshop on Spoken Language Translation. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for SIGHAN bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing. Hongmei Zhao and Qun Liu. 2010. The CIPS-SIGHAN CLP 2010 Chinese word segmentation bakeoff. In Proceedings of the Joint Conference on Chinese Language Processing. 929
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 930–939, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics PORT: a Precision-Order-Recall MT Evaluation Metric for Tuning Boxing Chen, Roland Kuhn and Samuel Larkin National Research Council Canada 283 Alexandre-Taché Boulevard, Gatineau (Québec), Canada J8X 3X7 {Boxing.Chen, Roland.Kuhn, Samuel.Larkin}@nrc.ca Abstract Many machine translation (MT) evaluation metrics have been shown to correlate better with human judgment than BLEU. In principle, tuning on these metrics should yield better systems than tuning on BLEU. However, due to issues such as speed, requirements for linguistic resources, and optimization difficulty, they have not been widely adopted for tuning. This paper presents PORT 1, a new MT evaluation metric which combines precision, recall and an ordering metric and which is primarily designed for tuning MT systems. PORT does not require external resources and is quick to compute. It has a better correlation with human judgment than BLEU. We compare PORT-tuned MT systems to BLEU-tuned baselines in five experimental conditions involving four language pairs. PORT tuning achieves consistently better performance than BLEU tuning, according to four automated metrics (including BLEU) and to human evaluation: in comparisons of outputs from 300 source sentences, human judges preferred the PORT-tuned output 45.3% of the time (vs. 32.7% BLEU tuning preferences and 22.0% ties). 1 Introduction Automatic evaluation metrics for machine translation (MT) quality are a key part of building statistical MT (SMT) systems. They play two 1 PORT: Precision-Order-Recall Tunable metric. roles: to allow rapid (though sometimes inaccurate) comparisons between different systems or between different versions of the same system, and to perform tuning of parameter values during system training. The latter has become important since the invention of minimum error rate training (MERT) (Och, 2003) and related tuning methods. These methods perform repeated decoding runs with different system parameter values, which are tuned to optimize the value of the evaluation metric over a development set with reference translations. MT evaluation metrics fall into three groups: • BLEU (Papineni et al., 2002), NIST (Doddington, 2002), WER, PER, TER (Snover et al., 2006), and LRscore (Birch and Osborne, 2011) do not use external linguistic information; they are fast to compute (except TER). • METEOR (Banerjee and Lavie, 2005), METEOR-NEXT (Denkowski and Lavie 2010), TER-Plus (Snover et al., 2009), MaxSim (Chan and Ng, 2008), TESLA (Liu et al., 2010), AMBER (Chen and Kuhn, 2011) and MTeRater (Parton et al., 2011) exploit some limited linguistic resources, such as synonym dictionaries, part-of-speech tagging, paraphrasing tables or word root lists. • More sophisticated metrics such as RTE (Pado et al., 2009), DCU-LFG (He et al., 2010) and MEANT (Lo and Wu, 2011) use higher level syntactic or semantic analysis to score translations. Among these metrics, BLEU is the most widely used for both evaluation and tuning. Many of the metrics correlate better with human judgments of translation quality than BLEU, as shown in recent WMT Evaluation Task reports (Callison-Burch et 930 al., 2010; Callison-Burch et al., 2011). However, BLEU remains the de facto standard tuning metric, for two reasons. First, there is no evidence that any other tuning metric yields better MT systems. Cer et al. 
(2010) showed that BLEU tuning is more robust than tuning with other metrics (METEOR, TER, etc.), as gauged by both automatic and human evaluation. Second, though a tuning metric should correlate strongly with human judgment, MERT (and similar algorithms) invoke the chosen metric so often that it must be computed quickly. Liu et al. (2011) claimed that TESLA tuning performed better than BLEU tuning according to human judgment. However, in the WMT 2011 "tunable metrics" shared pilot task, this did not hold (Callison-Burch et al., 2011). In (Birch and Osborne, 2011), humans preferred the output from LRscore-tuned systems 52.5% of the time, versus BLEU-tuned system outputs 43.9% of the time. In this work, our goal is to devise a metric that, like BLEU, is computationally cheap and language-independent, but that yields better MT systems than BLEU when used for tuning. We tried out different combinations of statistics before settling on the final definition of our metric. The final version, PORT, combines precision, recall, the strict brevity penalty (Chiang et al., 2008) and the strict redundancy penalty (Chen and Kuhn, 2011) in a quadratic mean expression. This expression is then further combined with a new measure of word ordering, v, designed to reflect long-distance as well as short-distance word reordering (BLEU only reflects short-distance reordering). In a later section, 3.3, we describe experiments that vary parts of the definition of PORT. Results given below show that PORT correlates better with human judgments of translation quality than BLEU does, and sometimes outperforms METEOR in this respect, based on data from WMT (2008-2010). However, since PORT is designed for tuning, the most important results are those showing that PORT tuning yields systems with better translations than those produced by BLEU tuning – both as determined by automatic metrics (including BLEU), and according to human judgment, as applied to five data conditions involving four language pairs.

2 BLEU and PORT

First, define n-gram precision p(n) and recall r(n):

p(n) = \frac{\#\,\text{n-grams}(T \cap R)}{\#\,\text{n-grams}(T)}   (1)

r(n) = \frac{\#\,\text{n-grams}(T \cap R)}{\#\,\text{n-grams}(R)}   (2)

where T = translation, R = reference. Both BLEU and PORT are defined on the document level, i.e. T and R are whole texts. If there are multiple references, we use the closest reference length for each translation hypothesis to compute the numbers of the reference n-grams.

2.1 BLEU

BLEU is composed of precision Pg(N) and brevity penalty BP:

BLEU = P_g(N) \times BP   (3)

where Pg(N) is the geometric average of the n-gram precisions

P_g(N) = \left( \prod_{n=1}^{N} p(n) \right)^{1/N}   (4)

The BLEU brevity penalty punishes the score if the translation length len(T) is shorter than the reference length len(R); it is:

BP = \min\left(1.0,\; e^{\,1 - len(R)/len(T)}\right)   (5)

2.2 PORT

PORT has five components: precision, recall, the strict brevity penalty (Chiang et al., 2008), the strict redundancy penalty (Chen and Kuhn, 2011) and an ordering measure v. The design of PORT is based on exhaustive experiments on a development data set. We do not have room here to give a rationale for all the choices we made when we designed PORT. However, a later section (3.3) reconsiders some of these design decisions.

2.2.1 Precision and Recall

The average precision and average recall used in PORT (unlike those used in BLEU) are the arithmetic averages of the n-gram precisions and recalls, Pa(N) and Ra(N):

P_a(N) = \frac{1}{N} \sum_{n=1}^{N} p(n)   (6)

R_a(N) = \frac{1}{N} \sum_{n=1}^{N} r(n)   (7)

We use two penalties to avoid too long or too short MT outputs. The first, the strict brevity penalty (SBP), is proposed in (Chiang et al., 2008). Let ti be the translation of input sentence i, and let ri be its reference. Set

SBP = \exp\left( 1 - \frac{\sum_i |r_i|}{\sum_i \min\{|t_i|, |r_i|\}} \right)   (8)

The second is the strict redundancy penalty (SRP), proposed in (Chen and Kuhn, 2011):

SRP = \exp\left( 1 - \frac{\sum_i \max\{|t_i|, |r_i|\}}{\sum_i |r_i|} \right)   (9)

To combine precision and recall, we tried four averaging methods: arithmetic (A), geometric (G), harmonic (H), and quadratic (Q) mean. If all of the values to be averaged are positive, the order is min ≤ H ≤ G ≤ A ≤ Q ≤ max, with equality holding if and only if all the values being averaged are equal. We chose the quadratic mean to combine precision and recall, as follows:

Qmean(N) = \sqrt{ \frac{ (P_a(N) \times SBP)^2 + (R_a(N) \times SRP)^2 }{2} }   (10)

2.2.2 Ordering Measure

Word ordering measures for MT compare two permutations of the original source-language word sequence: the permutation represented by the sequence of corresponding words in the MT output, and the permutation in the reference. Several ordering measures have been integrated into MT evaluation metrics recently. Birch and Osborne (2011) use either Hamming Distance or Kendall's τ Distance (Kendall, 1938) in their metric LRscore, thus obtaining two versions of LRscore. Similarly, Isozaki et al. (2011) adopt either Kendall's τ Distance or Spearman's ρ (Spearman, 1904) distance in their metrics. Our measure, v, is different from all of these. We use word alignment to compute the two permutations (LRscore also uses word alignment). The word alignment between the source input and the reference is computed beforehand using GIZA++ (Och and Ney, 2003) with the default settings, then refined with the grow-diag-final-and heuristic; the word alignment between the source input and the translation is generated by the decoder with the help of the word alignment inside each phrase pair. PORT uses permutations. These encode one-to-one relations but not one-to-many, many-to-one, many-to-many or null relations, all of which can occur in word alignments. We constrain the forbidden types of relation to become one-to-one, as in (Birch and Osborne, 2011). Thus, in a one-to-many alignment, the single source word is forced to align with the first target word; in a many-to-one alignment, monotone order is assumed for the target words; and source words originally aligned to null are aligned to the target word position just after the previous source word's target position. After the normalization above, suppose we have two permutations for the same source n-word input. E.g., let P1 = reference, P2 = hypothesis:

P_1: p_1^1\; p_1^2\; p_1^3\; p_1^4\; \ldots\; p_1^i\; \ldots\; p_1^n
P_2: p_2^1\; p_2^2\; p_2^3\; p_2^4\; \ldots\; p_2^i\; \ldots\; p_2^n

Here, each p_j^i is an integer denoting a position in the original source (e.g., p_1^1 = 7 means that the first word in P1 is the 7th source word). The ordering metric v is computed from two distance measures. The first is the absolute permutation distance:

DIST_1(P_1, P_2) = \sum_{i=1}^{n} | p_1^i - p_2^i |   (11)

Let

v_1 = 1 - \frac{DIST_1(P_1, P_2)}{n(n+1)/2}   (12)

v1 ranges from 0 to 1; a larger value means more similarity between the two permutations. This metric is similar to Spearman's ρ (Spearman, 1904). However, we have found that ρ punishes long-distance reorderings too heavily. For instance, v1 is more tolerant than ρ of the movement of "recently" in this example:

Ref: Recently, I visited Paris
Hyp: I visited Paris recently

Inspired by HMM word alignment (Vogel et al., 1996), our second distance measure is based on jump width. It punishes a sequence of words that moves a long distance with its internal order conserved just once, rather than once for every word in the sequence. In the following, only two groups of words have moved, so the jump width punishment is light:

Ref: In the winter of 2010, I visited Paris
Hyp: I visited Paris in the winter of 2010

So the second distance measure is

DIST_2(P_1, P_2) = \sum_{i=1}^{n} | (p_1^i - p_1^{i-1}) - (p_2^i - p_2^{i-1}) |   (13)

where we set p_1^0 = 0 and p_2^0 = 0. Let

v_2 = 1 - \frac{DIST_2(P_1, P_2)}{n^2 - 1}   (14)

As with v1, v2 also ranges from 0 to 1, and larger values indicate more similar permutations. The ordering measure vs is the harmonic mean of v1 and v2:

v_s = \frac{2}{1/v_1 + 1/v_2}   (15)

vs in (15) is computed at the segment level. For multiple references, we compute vs for each, and then choose the biggest one as the segment-level ordering similarity. We compute document-level ordering with a weighted arithmetic mean:

v = \frac{\sum_{s=1}^{l} v_s \times len(R_s)}{\sum_{s=1}^{l} len(R_s)}   (16)

where l is the number of segments of the document, and len(R_s) is the length of the reference for segment s.
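Before combining Qmean and v, the segment-level ordering measure can be made concrete with a small sketch of Eqs. (11)-(15). This is our own reconstruction for illustration; the function and variable names are assumptions, and the reference and hypothesis permutations are assumed to have already been normalized to one-to-one form as described above.

```python
# Sketch of the segment-level ordering measure of Section 2.2.2 (Eqs. 11-15).
# p1 and p2 are the reference and hypothesis permutations of source positions.

def ordering_similarity(p1, p2):
    n = len(p1)
    # Eqs. (11)-(12): absolute permutation distance, normalized to v1.
    dist1 = sum(abs(a - b) for a, b in zip(p1, p2))
    v1 = 1.0 - dist1 / (n * (n + 1) / 2.0)
    # Eqs. (13)-(14): jump-width distance, with p^0 = 0 prepended, giving v2.
    q1, q2 = [0] + list(p1), [0] + list(p2)
    dist2 = sum(abs((q1[i] - q1[i - 1]) - (q2[i] - q2[i - 1]))
                for i in range(1, n + 1))
    v2 = 1.0 - dist2 / (n * n - 1.0)
    # Eq. (15): harmonic mean of v1 and v2.
    return 2.0 / (1.0 / v1 + 1.0 / v2)

# Identical order scores 1; a single long-range move is punished mildly.
print(ordering_similarity([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(ordering_similarity([1, 2, 3, 4], [2, 3, 4, 1]))  # 0.5
```

Document-level v is then the length-weighted average of these segment scores, as in Eq. (16).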
This metric is similar to Spearman’s ρ (Spearman, 1904). However, we have found that ρ punishes long-distance reorderings too heavily. For instance, 1 ν is more tolerant than ρ of the movement of “recently” in this example: Ref: Recently, I visited Paris Hyp: I visited Paris recently Inspired by HMM word alignment (Vogel et al., 1996), our second distance measure is based on jump width. This punishes a sequence of words that moves a long distance with its internal order conserved, only once rather than on every word. In the following, only two groups of words have moved, so the jump width punishment is light: Ref: In the winter of 2010, I visited Paris Hyp: I visited Paris in the winter of 2010 So the second distance measure is 932 ∑ = − − − − − = n i i i i i p p p p P P DIST 1 1 2 2 1 1 1 2 1 2 |) ( ) (| ) , ( (13) where we set 0 0 1 = p and 0 0 2 = p . Let 1 ) , ( 1 2 2 1 2 2 − − = n P P DIST v (14) As with v1, v2 is also from 0 to 1, and larger values indicate more similar permutations. The ordering measure vs is the harmonic mean of v1 and v2: ) / 1 / 1 /( 2 2 1 v v vs + = . (15) vs in (15) is computed at segment level. For multiple references, we compute vs for each, and then choose the biggest one as the segment level ordering similarity. We compute document level ordering with a weighted arithmetic mean: ∑ ∑ = = × = l s s l s s s R len R len v v 1 1 ) ( ) ( (16) where l is the number of segments of the document, and len(R) is the length of the reference. 2.2.3 Combined Metric Finally, Qmean(N) (Eq. (10) and the word ordering measure v are combined in a harmonic mean: α v N Qmean PORT / 1 ) ( / 1 2 + = (17) Here α is a free parameter that is tuned on heldout data. As it increases, the importance of the ordering measure v goes up. For our experiments, we tuned α on Chinese-English data, setting it to 0.25 and keeping this value for the other language pairs. The use of v means that unlike BLEU, PORT requires word alignment information. 3 Experiments 3.1 PORT as an Evaluation Metric We studied PORT as an evaluation metric on WMT data; test sets include WMT 2008, WMT 2009, and WMT 2010 all-to-English, plus 2009, 2010 English-to-all submissions. The languages “all” (“xx” in Table 1) include French, Spanish, German and Czech. Table 1 summarizes the test set statistics. In order to compute the v part of PORT, we require source-target word alignments for the references and MT outputs. These aren’t included in WMT data, so we compute them with GIZA++. We used Spearman’s rank correlation coefficient ρ to measure correlation of the metric with systemlevel human judgments of translation. The human judgment score is based on the “Rank” only, i.e., how often the translations of the system were rated as better than those from other systems (CallisonBurch et al., 2008). Thus, BLEU, METEOR, and PORT were evaluated on how well their rankings correlated with the human ones. For the segment level, we follow (Callison-Burch et al., 2010) in using Kendall’s rank correlation coefficient τ. As shown in Table 2, we compared PORT with smoothed BLEU (mteval-v13a), and METEOR v1.0. Both BLEU and PORT perform matching of n-grams up to n = 4. Set Year Lang. #system #sent-pair Test1 2008 xx-en 43 7,804 Test2 2009 xx-en 45 15,087 Test3 2009 en-xx 40 14,563 Test4 2010 xx-en 53 15,964 Test5 2010 en-xx 32 18,508 Table 1: Statistics of the WMT dev and test sets. Metric Into-En Out-of-En sys. seg. sys. seg. 
BLEU 0.792 0.215 0.777 0.240 METEOR 0.834 0.231 0.835 0.225 PORT 0.801 0.236 0.804 0.242 Table 2: Correlations with human judgment on WMT PORT achieved the best segment level correlation with human judgment on both the “into English” and “out of English” tasks. At the system level, PORT is better than BLEU, but not as good as METEOR. This is because we designed PORT to carry out tuning; we did not optimize its performance as an evaluation metric, but rather, to optimize system tuning performance. There are some other possible reasons why PORT did not outperform METEOR v1.0 at system level. Most WMT submissions involve language pairs with similar word order, so the ordering factor v in PORT won’t play a big role. Also, v depends on source-target word alignments for reference and test sets. These alignments were performed by GIZA++ models trained on the test data only. 933 3.2 PORT as a Metric for Tuning 3.2.1 Experimental details The first set of experiments to study PORT as a tuning metric involved Chinese-to-English (zh-en); there were two data conditions. The first is the small data condition where FBIS2 is used to train the translation and reordering models. It contains 10.5M target word tokens. We trained two language models (LMs), which were combined loglinearly. The first is a 4-gram LM which is estimated on the target side of the texts used in the large data condition (below). The second is a 5gram LM estimated on English Gigaword. The large data condition uses training data from NIST3 2009 (Chinese-English track). All allowed bilingual corpora except UN, Hong Kong Laws and Hong Kong Hansard were used to train the translation model and reordering models. There are about 62.6M target word tokens. The same two LMs are used for large data as for small data, and the same development (“dev”) and test sets are also used. The dev set comprised mainly data from the NIST 2005 test set, and also some balanced-genre web-text from NIST. Evaluation was performed on NIST 2006 and 2008. Four references were provided for all dev and test sets. The third data condition is a French-to-English (fr-en). The parallel training data is from Canadian Hansard data, containing 59.3M word tokens. We used two LMs in loglinear combination: a 4-gram LM trained on the target side of the parallel training data, and the English Gigaword 5-gram LM. The dev set has 1992 sentences; the two test sets have 2140 and 2164 sentences respectively. There is one reference for all dev and test sets. The fourth and fifth conditions involve German-English Europarl data. This parallel corpus contains 48.5M German tokens and 50.8M English tokens. We translate both German-to-English (deen) and English-to-German (en-de). The two conditions both use an LM trained on the target side of the parallel training data, and de-en also uses the English Gigaword 5-gram LM. News test 2008 set is used as dev set; News test 2009, 2010, 2011 are used as test sets. One reference is provided for all dev and test sets. 2 LDC2003E14 3 http://www.nist.gov/speech/tests/mt All experiments were carried out with α in Eq. (17) set to 0.25, and involved only lowercase European-language text. They were performed with MOSES (Koehn et al., 2007), whose decoder includes lexicalized reordering, translation models, language models, and word and phrase penalties. Tuning was done with n-best MERT, which is available in MOSES. In all tuning experiments, both BLEU and PORT performed lower case matching of n-grams up to n = 4. 
We also conducted experiments with tuning on a version of BLEU that incorporates SBP (Chiang et al., 2008) as a baseline. The results of the original IBM BLEU and BLEU with SBP were tied; to save space, we only report results for the original IBM BLEU here.

3.2.2 Comparisons with automatic metrics

First, let us see whether BLEU tuning and PORT tuning yield systems with different translations for the same input. The first row of Table 3 shows the percentage of identical sentence outputs for the two tuning types on test data. The second row shows the similarity of the two outputs at the word level (as measured by 1-TER): e.g., for the two zh-en tasks, the two tuning types give systems whose outputs are about 25-30% different at the word level. By contrast, only about 10% of output words for fr-en differ for BLEU vs. PORT tuning.

            zh-en small  zh-en large  fr-en Hans  de-en WMT  en-de WMT
Same sent.  17.7%        13.5%        56.6%       23.7%      26.1%
1-TER       74.2         70.9         91.6        87.1       86.6
Table 3: Similarity of BLEU-tuned and PORT-tuned system outputs on test data.

Task         Tune   BLEU   MTR   1-TER  PORT
zh-en small  BLEU   26.8   55.2  38.0   49.7
             PORT   27.2*  55.7  38.0   50.0
zh-en large  BLEU   29.9   58.4  41.2   53.0
             PORT   30.3*  59.0  42.0   53.2
fr-en Hans   BLEU   38.8   69.8  54.2   57.1
             PORT   38.8   69.6  54.6   57.1
de-en WMT    BLEU   20.1   55.6  38.4   39.6
             PORT   20.3   56.0  38.4   39.7
en-de WMT    BLEU   13.6   43.3  30.1   31.7
             PORT   13.6   43.3  30.7   31.7
Table 4: Automatic evaluation scores (%) on test data. * indicates results that are significantly better than the baseline (p<0.05).

Table 4 shows translation quality for BLEU- and PORT-tuned systems, as assessed by automatic metrics. We employed BLEU4, METEOR (v1.0), TER (v0.7.25), and the new metric PORT. In the table, TER scores are presented as 1-TER to ensure that, for all metrics, higher scores mean higher quality. All scores are averages over the relevant test sets. There are twenty comparisons in the table. Among these, there is one case (French-English assessed with METEOR) where BLEU outperforms PORT, there are seven ties, and there are twelve cases where PORT is better. Table 3 shows that fr-en outputs are very similar for both tuning types, so the fr-en results are perhaps less informative than the others. Overall, PORT tuning has a striking advantage over BLEU tuning.

Both (Liu et al., 2011) and (Cer et al., 2011) showed that with MERT, if you want the best possible score for a system's translations according to metric M, then you should tune with M. This does not appear to be true when PORT and BLEU tuning are compared in Table 4: for the two Chinese-to-English tasks in the table, PORT tuning yields a better BLEU score than BLEU tuning, with significance at p < 0.05. We are currently investigating why PORT tuning gives higher BLEU scores than BLEU tuning for Chinese-English and German-English. In internal tests we have found no systematic difference in dev-set BLEUs, so we speculate that PORT's emphasis on reordering yields models that generalize better for these two language pairs.

3.2.3 Human Evaluation

We conducted a human evaluation on outputs from the BLEU- and PORT-tuned systems. The examples were randomly picked from all "to-English" conditions shown in Tables 3 & 4 (i.e., all conditions except English-to-German). We performed pairwise comparison of the translations produced by the two system types, as in (Callison-Burch et al., 2010; Callison-Burch et al., 2011).
First, we eliminated examples where the reference had fewer than 10 words or more than 50 words, or where the outputs of the BLEU-tuned and PORT-tuned systems were identical. The evaluators (colleagues not involved with this paper) objected to comparing two bad translations, so we then selected for human evaluation only translations that had high sentence-level (1-TER) scores. To be fair to both metrics, for each condition we took the union of examples whose BLEU-tuned output was in the top n% of BLEU outputs and those whose PORT-tuned output was in the top n% of PORT outputs (based on 1-TER). The value of n varied by condition: we chose the top 20% of zh-en small, the top 20% of de-en, the top 50% of fr-en and the top 40% of zh-en large. We then randomly picked 450 of these examples to form the manual evaluation set. This set was split into 15 subsets, each containing 30 sentences. The first subset was used as a common set; each of the other 14 subsets was put in a separate file, to which the common set was added. Each of the 14 evaluators received one of these files, containing 60 examples (30 unique examples and 30 examples shared with the other evaluators). Within each example, the BLEU-tuned and PORT-tuned outputs were presented in random order.

After receiving the 14 annotated files, we computed Fleiss's Kappa (Fleiss, 1971) on the common set to measure inter-annotator agreement, κ_all. Then we excluded annotators one at a time to compute κ_i (the Kappa score without the i-th annotator, i.e., from the other 13). Finally, we filtered out the files from the 4 annotators whose answers were most different from everybody else's, i.e., the annotators with the biggest κ_i - κ_all values. This left 10 files from 10 evaluators. We threw away the common set in each file, leaving 300 pairwise comparisons.

Table 5 shows that the evaluators preferred the output from the PORT-tuned system 136 times, the output from the BLEU-tuned one 98 times, and had no preference the other 66 times. This indicates that there is a human preference for outputs from the PORT-tuned system over those from the BLEU-tuned system at the p<0.01 significance level (in cases where people prefer one of them). PORT tuning seems to have a bigger advantage over BLEU tuning when the translation task is hard. Of the Table 5 language pairs, the one where PORT tuning helps most has the lowest BLEU in Table 4 (German-English); the one where it helps least in Table 5 has the highest BLEU in Table 4 (French-English). (Table 5 does not prove BLEU is superior to PORT for French-English tuning: statistically, the difference between 14 and 17 here is a tie.) Perhaps by picking examples for each condition that were the easiest for the system to translate (to make human evaluation easier), we mildly biased the results in Table 5 against PORT tuning. Another possible factor is reordering. PORT differs from BLEU partly in modeling long-distance reordering more accurately; English and French have similar word order, but the other two language pairs do not. The results in Section 3.3 (below) for Qmean, a version of PORT without the word ordering factor v, suggest v may be defined suboptimally for French-English.

             PORT win     BLEU win     equal        total
zh-en small  19 (38.8%)   18 (36.7%)   12 (24.5%)   49
zh-en large  69 (45.7%)   46 (30.5%)   36 (23.8%)   151
fr-en Hans   14 (32.6%)   17 (39.5%)   12 (27.9%)   43
de-en WMT    34 (59.7%)   17 (29.8%)   6 (10.5%)    57
All          136 (45.3%)  98 (32.7%)   66 (22.0%)   300
Table 5: Human preference for outputs from the PORT-tuned vs. BLEU-tuned system.
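The agreement computation and annotator filtering described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the label set ('PORT', 'BLEU', 'equal'), the data layout and the function names are our assumptions, and the filtering criterion follows the κ_i - κ_all reading of the original.

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa; ratings is a list of items, each a list of labels, one per rater."""
    n_raters = len(ratings[0])
    n_items = len(ratings)
    cat_totals = Counter()
    p_i = []
    for item in ratings:
        counts = Counter(item)
        cat_totals.update(counts)
        agree = sum(c * c for c in counts.values()) - n_raters
        p_i.append(agree / (n_raters * (n_raters - 1)))
    p_bar = sum(p_i) / n_items
    p_e = sum((c / (n_items * n_raters)) ** 2 for c in cat_totals.values())
    return (p_bar - p_e) / (1.0 - p_e) if p_e < 1.0 else 1.0

def keep_most_consistent(common, n_drop=4):
    """common: dict annotator -> list of labels on the shared 30-sentence set.
    Drops the n_drop annotators whose removal raises kappa the most
    (largest kappa_i - kappa_all) and returns the remaining annotators."""
    names = sorted(common)
    def kappa_without(excluded=None):
        kept = [a for a in names if a != excluded]
        items = list(zip(*(common[a] for a in kept)))
        return fleiss_kappa([list(it) for it in items])
    kappa_all = kappa_without(None)
    gain = {a: kappa_without(a) - kappa_all for a in names}
    dropped = set(sorted(gain, key=gain.get, reverse=True)[:n_drop])
    return [a for a in names if a not in dropped]
```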
3.2.4 Computation time

A good tuning metric should run very fast; this is one of the advantages of BLEU. Table 6 shows the time required to score the 100-best hypotheses for the dev set for each data condition during MERT, for BLEU and PORT in similar implementations. The average time of each iteration, including model loading, decoding, scoring and running MERT, is in brackets. (Our experiments are run on a cluster; the average time for an iteration includes queuing, and the speed of each node differs slightly, so the bracketed times are only for reference.) PORT takes roughly 1.5-2.5 times as long to compute as BLEU, which is reasonable for a tuning metric.

       zh-en small  zh-en large  fr-en Hans  de-en WMT  en-de WMT
BLEU   3 (13)       3 (17)       2 (19)      2 (20)     2 (11)
PORT   5 (21)       5 (24)       4 (28)      5 (28)     4 (15)
Table 6: Time to score 100-best hypotheses (average time per iteration) in minutes.

3.2.5 Robustness to word alignment errors

PORT, unlike BLEU, depends on word alignments. How does the quality of the word alignment between source and reference affect PORT tuning? We created a dev set from Chinese Treebank (CTB) hand-aligned data. It contains 588 sentences (13K target words), with one reference. We also ran GIZA++ to obtain an automatic word alignment, computed on CTB and FBIS. The AER of the GIZA++ word alignment on CTB is 0.32. In Table 7, CTB is the dev set. The table shows tuning with BLEU, PORT with human word alignment (PORT + HWA), and PORT with GIZA++ word alignment (PORT + GWA); the condition is zh-en small. Despite the AER of 0.32 for the automatic word alignment, PORT tuning works about as well with this alignment as with the gold-standard CTB one. (The BLEU baseline in Table 7 differs from the Table 4 BLEU baseline because the dev sets differ.)

Tune        BLEU  MTR   1-TER  PORT
BLEU        25.1  53.7  36.4   47.8
PORT + HWA  25.3  54.4  37.0   48.2
PORT + GWA  25.3  54.6  36.4   48.1
Table 7: PORT tuning with human and GIZA++ word alignment.

Task         Tune   BLEU  MTR   1-TER  PORT
zh-en small  BLEU   26.8  55.2  38.0   49.7
             PORT   27.2  55.7  38.0   50.0
             Qmean  26.8  55.3  38.2   49.8
zh-en large  BLEU   29.9  58.4  41.2   53.0
             PORT   30.3  59.0  42.0   53.2
             Qmean  30.2  58.5  41.8   53.1
fr-en Hans   BLEU   38.8  69.8  54.2   57.1
             PORT   38.8  69.6  54.6   57.1
             Qmean  38.8  69.8  54.6   57.1
de-en WMT    BLEU   20.1  55.6  38.4   39.6
             PORT   20.3  56.0  38.4   39.7
             Qmean  20.3  56.3  38.1   39.7
en-de WMT    BLEU   13.6  43.3  30.1   31.7
             PORT   13.6  43.3  30.7   31.7
             Qmean  13.6  43.4  30.3   31.7
Table 8: Impact of the ordering measure v on PORT.

3.3 Analysis

Now we look at the details of PORT to see which of them are the most important. We do not have space here to describe all the details we studied, but we can describe some of them. For example, does the ordering measure v help tuning performance? To answer this, we introduce an intermediate metric, Qmean as in Eq. (10): PORT without the ordering measure. Table 8 compares tuning with BLEU, PORT and Qmean. PORT outperforms Qmean on seven of the eight automatic scores shown for small and large Chinese-English. However, for the European language pairs, PORT and Qmean seem to be tied. This may be because we optimized α in Eq. (17) for Chinese-English, making the influence of the word ordering measure v in PORT too strong for the European pairs, which have similar word order. Measure v seems to help Chinese-English tuning. What would results be on that language pair if we were to replace v in PORT with another ordering measure?
Table 9 gives a partial answer: it shows the effect of replacing v in PORT with Spearman's ρ or Kendall's τ, for the zh-en small condition (CTB with human word alignment is the dev set). The original definition of PORT seems preferable.

Tune     BLEU  METEOR  1-TER
BLEU     25.1  53.7    36.4
PORT(v)  25.3  54.4    37.0
PORT(ρ)  25.1  54.2    36.3
PORT(τ)  25.1  54.0    36.0
Table 9: Comparison of the ordering measure: replacing v with ρ or τ in PORT.

A related question is how much word ordering improvement we obtained from tuning with PORT. We evaluate Chinese-English word ordering with three measures: Spearman's ρ, Kendall's τ distance as applied to two permutations (see Section 2.2.2), and our own measure v. Table 10 shows the effects of BLEU and PORT tuning on these three measures, for three test sets in the zh-en large condition. Reference alignments for CTB were created by humans, while the NIST06 and NIST08 reference alignments were produced with GIZA++. A large value of ρ, τ, or v implies that the outputs have ordering similar to that in the reference. From the table, we see that the PORT-tuned system yielded better word order than the BLEU-tuned system in all nine combinations of test sets and ordering measures. The advantage of PORT tuning is particularly noticeable on the most reliable test set: the hand-aligned CTB data.

Task    Tune  ρ      τ      v
NIST06  BLEU  0.979  0.926  0.915
        PORT  0.979  0.928  0.917
NIST08  BLEU  0.980  0.926  0.916
        PORT  0.981  0.929  0.918
CTB     BLEU  0.973  0.860  0.847
        PORT  0.975  0.866  0.853
Table 10: Ordering scores (ρ, τ and v) for test sets NIST 2006, NIST 2008 and CTB.

What is the impact of the strict redundancy penalty on PORT? Note that in Table 8, even though Qmean has no ordering measure, it outperforms BLEU. Table 11 shows the BLEU brevity penalty (BP) and the ratio (number of matching 1- & 4-grams)/(number of total 1- & 4-grams) for the translations. The BLEU-tuned and Qmean-tuned systems generate similar numbers of matching n-grams, but the Qmean-tuned systems produce fewer n-grams in total (thus, shorter translations). E.g., for zh-en small, the BLEU-tuned system produced 44,677 1-grams (words), while the Qmean-tuned system produced 43,555 1-grams; both have about 32,000 1-grams matching the references. Thus, the Qmean translations have higher precision. We believe this is because of the strict redundancy penalty in Qmean. As usual, French-English is the outlier: the two outputs here are typically so similar that BLEU and Qmean tuning yield very similar n-gram statistics.

Task         Tune   1-gram       4-gram      BP
zh-en small  BLEU   32055/44677  4603/39716  0.967
             Qmean  31996/43555  4617/38595  0.962
zh-en large  BLEU   34583/45370  5954/40410  0.972
             Qmean  34369/44229  5987/39271  0.959
fr-en Hans   BLEU   28141/40525  8654/34224  0.983
             Qmean  28167/40798  8695/34495  0.990
de-en WMT    BLEU   42380/75428  5151/66425  1.000
             Qmean  42173/72403  5203/63401  0.968
en-de WMT    BLEU   30326/62367  2261/54812  1.000
             Qmean  30343/62092  2298/54537  0.997
Table 11: #matching-ngrams/#total-ngrams and BP score.

4 Conclusions

In this paper, we have proposed a new tuning metric for SMT systems. PORT incorporates precision, recall, a strict brevity penalty and a strict redundancy penalty, plus a new word ordering measure v. As an evaluation metric, PORT performed better than BLEU at the system level and the segment level, and it was competitive with or slightly superior to METEOR at the segment level. Most important, our results show that PORT-tuned MT systems yield better translations than BLEU-tuned systems on several language pairs, according both to automatic metrics and human evaluations.
In future work, we plan to tune the free parameter α for each language pair. 937 References S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of ACL Workshop on Intrinsic & Extrinsic Evaluation Measures for Machine Translation and/or Summarization. A. Birch and M. Osborne. 2011. Reordering Metrics for MT. In Proceedings of ACL. C. Callison-Burch, C. Fordyce, P. Koehn, C. Monz and J. Schroeder. 2008. Further Meta-Evaluation of Machine Translation. In Proceedings of WMT. C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of EACL. C. Callison-Burch, P. Koehn, C. Monz, K. Peterson, M. Przybocki and O. Zaidan. 2010. Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation. In Proceedings of WMT. C. Callison-Burch, P. Koehn, C. Monz and O. Zaidan. 2011. Findings of the 2011 Workshop on Statistical Machine Translation. In Proceedings of WMT. D. Cer, D. Jurafsky and C. Manning. 2010. The Best Lexical Metric for Phrase-Based Statistical MT System Optimization. In Proceedings of NAACL. Y. S. Chan and H. T. Ng. 2008. MAXSIM: A maximum similarity metric for machine translation evaluation. In Proceedings of ACL. B. Chen and R. Kuhn. 2011. AMBER: A Modified BLEU, Enhanced Ranking Metric. In: Proceedings of WMT. Edinburgh, UK. July. D. Chiang, S. DeNeefe, Y. S. Chan, and H. T. Ng. 2008. Decomposability of translation metrics for improved evaluation and efficient algorithms. In Proceedings of EMNLP, pages 610–619. M. Denkowski and A. Lavie. 2010. Meteor-next and the meteor paraphrase tables: Improved evaluation support for five target languages. In Proceedings of the Joint Fifth Workshop on SMT and MetricsMATR, pages 314–317. G. Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of HLT. J. L. Fleiss. 1971. Measuring nominal scale agreement among many raters. In Psychological Bulletin, Vol. 76, No. 5 pp. 378–382. Y. He, J. Du, A. Way and J. van Genabith. 2010. The DCU dependency-based metric in WMTMetricsMATR 2010. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 324–328. H. Isozaki, T. Hirao, K. Duh, K. Sudoh, H. Tsukada. 2010. Automatic Evaluation of Translation Quality for Distant Language Pairs. In Proceedings of EMNLP. M. Kendall. 1938. A New Measure of Rank Correlation. In Biometrika, 30 (1–2), pp. 81–89. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin and E. Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of ACL, pp. 177-180, Prague, Czech Republic. A. Lavie and M. J. Denkowski. 2009. The METEOR metric for automatic evaluation of machine translation. Machine Translation, 23. C. Liu, D. Dahlmeier, and H. T. Ng. 2010. TESLA: Translation evaluation of sentences with linearprogramming-based analysis. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 329–334. C. Liu, D. Dahlmeier, and H. T. Ng. 2011. Better evaluation metrics lead to better machine translation. In Proceedings of EMNLP. C. Lo and D. Wu. 2011. MEANT: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles. In Proceedings of ACL. F. J. Och. 2003. 
Minimum error rate training in statistical machine translation. In Proceedings of ACL-2003. Sapporo, Japan. F. J. Och and H. Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. In Computational Linguistics, 29, pp. 19–51. S. Pado, M. Galley, D. Jurafsky, and C.D. Manning. 2009. Robust machine translation evaluation with entailment features. In Proceedings of ACL-IJCNLP. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL. K. Parton, J. Tetreault, N. Madnani and M. Chodorow. 2011. E-rating Machine Translation. In Proceedings of WMT. M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A Study of Translation Edit Rate 938 with Targeted Human Annotation. In Proceedings of Association for Machine Translation in the Americas. M. Snover, N. Madnani, B. Dorr, and R. Schwartz. 2009. Fluency, Adequacy, or HTER? Exploring Different Human Judgments with a Tunable MT Metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, Athens, Greece. C. Spearman. 1904. The proof and measurement of association between two things. In American Journal of Psychology, 15, pp. 72–101. S. Vogel, H. Ney, and C. Tillmann. 1996. HMM based word alignment in statistical translation. In Proceedings of COLING. 939
2012
98
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 940–949, Jeju, Republic of Korea, 8-14 July 2012. c⃝2012 Association for Computational Linguistics Mixing Multiple Translation Models in Statistical Machine Translation Majid Razmara1 George Foster2 Baskaran Sankaran1 Anoop Sarkar1 1 Simon Fraser University, 8888 University Dr., Burnaby, BC, Canada {razmara,baskaran,anoop}@sfu.ca 2 National Research Council Canada, 283 Alexandre-Tach´e Blvd, Gatineau, QC, Canada [email protected] Abstract Statistical machine translation is often faced with the problem of combining training data from many diverse sources into a single translation model which then has to translate sentences in a new domain. We propose a novel approach, ensemble decoding, which combines a number of translation systems dynamically at the decoding step. In this paper, we evaluate performance on a domain adaptation setting where we translate sentences from the medical domain. Our experimental results show that ensemble decoding outperforms various strong baselines including mixture models, the current state-of-the-art for domain adaptation in machine translation. 1 Introduction Statistical machine translation (SMT) systems require large parallel corpora in order to be able to obtain a reasonable translation quality. In statistical learning theory, it is assumed that the training and test datasets are drawn from the same distribution, or in other words, they are from the same domain. However, bilingual corpora are only available in very limited domains and building bilingual resources in a new domain is usually very expensive. It is an interesting question whether a model that is trained on an existing large bilingual corpus in a specific domain can be adapted to another domain for which little parallel data is present. Domain adaptation techniques aim at finding ways to adjust an out-of-domain (OUT) model to represent a target domain (in-domain or IN). Common techniques for model adaptation adapt two main components of contemporary state-of-theart SMT systems: the language model and the translation model. However, language model adaptation is a more straight-forward problem compared to translation model adaptation, because various measures such as perplexity of adapted language models can be easily computed on data in the target domain. As a result, language model adaptation has been well studied in various work (Clarkson and Robinson, 1997; Seymore and Rosenfeld, 1997; Bacchiani and Roark, 2003; Eck et al., 2004) both for speech recognition and for machine translation. It is also easier to obtain monolingual data in the target domain, compared to bilingual data which is required for translation model adaptation. In this paper, we focused on adapting only the translation model by fixing a language model for all the experiments. We expect domain adaptation for machine translation can be improved further by combining orthogonal techniques for translation model adaptation combined with language model adaptation. In this paper, a new approach for adapting the translation model is proposed. We use a novel system combination approach called ensemble decoding in order to combine two or more translation models with the goal of constructing a system that outperforms all the component models. The strength of this system combination method is that the systems are combined in the decoder. This enables the decoder to pick the best hypotheses for each span of the input. 
The main applications of ensemble models are domain adaptation, domain mixing and system combination. We have modified Kriya (Sankaran et al., 2012), an in-house implementation of the hierarchical phrase-based translation model (Chiang, 2005), to implement ensemble decoding using multiple translation models. We compare the results of ensemble decoding with a number of baselines for domain adaptation. In addition to the basic approach of concatenating the in-domain and out-of-domain data, we also trained a log-linear mixture model (Foster and Kuhn, 2007) as well as the linear mixture model of (Foster et al., 2010) for conditional phrase-pair probabilities over IN and OUT. Furthermore, within the framework of ensemble decoding, we study and evaluate various methods for combining translation tables.

2 Baselines

The natural baseline for model adaptation is to concatenate the IN and OUT data into a single parallel corpus and train a model on it. In addition to this baseline, we have experimented with two more sophisticated baselines which are based on mixture techniques.

2.1 Log-Linear Mixture

Log-linear translation model (TM) mixtures are of the form:

p(\bar{e}|\bar{f}) \propto \exp\Big( \sum_{m}^{M} \lambda_m \log p_m(\bar{e}|\bar{f}) \Big)

where m ranges over IN and OUT, p_m(\bar{e}|\bar{f}) is an estimate from a component phrase table, and each λ_m is a weight in the top-level log-linear model, set so as to maximize dev-set BLEU using minimum error rate training (Och, 2003). We learn separate weights for relative-frequency and lexical estimates for both p_m(\bar{e}|\bar{f}) and p_m(\bar{f}|\bar{e}). Thus, for 2 component models (from the IN and OUT training corpora), there are 4 × 2 = 8 TM weights to tune. Whenever a phrase pair does not appear in a component phrase table, we set the corresponding p_m(\bar{e}|\bar{f}) to a small epsilon value.

2.2 Linear Mixture

Linear TM mixtures are of the form:

p(\bar{e}|\bar{f}) = \sum_{m}^{M} \lambda_m p_m(\bar{e}|\bar{f})

Our technique for setting λ_m is similar to that outlined in Foster et al. (2010). We first extract a joint phrase-pair distribution \tilde{p}(\bar{e},\bar{f}) from the development set using standard techniques (HMM word alignment with grow-diag-and symmetrization (Koehn et al., 2003)). We then find the set of weights \hat{\lambda} that minimizes the cross-entropy of the mixture p(\bar{e}|\bar{f}) with respect to \tilde{p}(\bar{e},\bar{f}):

\hat{\lambda} = \arg\max_{\lambda} \sum_{\bar{e},\bar{f}} \tilde{p}(\bar{e},\bar{f}) \log \sum_{m}^{M} \lambda_m p_m(\bar{e}|\bar{f})

For efficiency and stability, we use the EM algorithm to find \hat{\lambda}, rather than L-BFGS as in (Foster et al., 2010). Whenever a phrase pair does not appear in a component phrase table, we set the corresponding p_m(\bar{e}|\bar{f}) to 0; pairs in \tilde{p}(\bar{e},\bar{f}) that do not appear in at least one component table are discarded. We learn separate linear mixtures for relative-frequency and lexical estimates for both p(\bar{e}|\bar{f}) and p(\bar{f}|\bar{e}). These four features then appear in the top-level model as usual; there is no runtime cost for the linear mixture.
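A minimal sketch of the EM procedure for the linear mixture weights is given below. It is an illustration under our own assumptions about the data structures (dictionaries keyed by phrase pairs) and a fixed iteration count; it is not the authors' implementation.

```python
def em_mixture_weights(dev_joint, component_probs, iterations=50):
    """EM for linear TM mixture weights.

    dev_joint:        dict mapping (e, f) phrase pairs to the empirical joint
                      probability p~(e, f) extracted from the dev set (sums to 1).
    component_probs:  list of dicts, one per component model, mapping (e, f) to
                      the conditional estimate p_m(e|f); missing pairs count as 0.
    Returns the mixture weights lambda_m (non-negative, summing to 1).
    """
    m = len(component_probs)
    lambdas = [1.0 / m] * m
    # Discard dev pairs that no component table covers, as described above.
    pairs = [p for p in dev_joint
             if any(comp.get(p, 0.0) > 0.0 for comp in component_probs)]
    for _ in range(iterations):
        expected = [0.0] * m
        for p in pairs:
            scores = [lambdas[k] * component_probs[k].get(p, 0.0) for k in range(m)]
            z = sum(scores)
            if z > 0.0:
                for k in range(m):
                    # E-step: posterior responsibility of component k for this pair
                    expected[k] += dev_joint[p] * scores[k] / z
        total = sum(expected)
        # M-step: re-estimate the weights from the expected counts
        lambdas = [c / total for c in expected]
    return lambdas
```

In the two-model IN/OUT setting described above, this procedure would be run four times, once for each of the relative-frequency and lexical estimates in each direction.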
Given a number of translation models which are already trained and tuned, the ensemble decoder uses hypotheses constructed from all of the models in order to translate a sentence. We use the bottom-up CKY parsing algorithm for decoding. For each sentence, a CKY chart is constructed. The cells of the CKY chart are populated with appropriate rules from all the phrase tables of the different components. As in the Hiero SMT system (Chiang, 2005), the cells which span up to a certain length (i.e. the maximum span length) are populated from the phrase tables, and the rest of the chart uses glue rules as defined in (Chiang, 2005). The rules suggested by the component models are combined in a single set. Some of the rules may be unique and others may be common with other component models' rule sets, though with different scores. Therefore, we need to combine the scores of such common rules and assign a single score to them. Depending on the mixture operation used for combining the scores, we would get different mixture scores. The choice of mixture operation will be discussed in Section 3.1. Figure 1 illustrates how the CKY chart is filled with the rules. Each cell, covering a span, is populated with rules from all component models as well as from cells covering a sub-span of it.

In typical log-linear model SMT, the posterior probability for each phrase pair (\bar{e}, \bar{f}) is given by:

p(\bar{e}|\bar{f}) \propto \exp\Big( \underbrace{\sum_i w_i \phi_i(\bar{e},\bar{f})}_{w \cdot \phi} \Big)

Ensemble decoding uses the same framework for each individual system. Therefore, the score of a phrase pair (\bar{e}, \bar{f}) in the ensemble model is:

p(\bar{e}|\bar{f}) \propto \exp\Big( \underbrace{w_1 \cdot \phi_1}_{\text{1st model}} \oplus \underbrace{w_2 \cdot \phi_2}_{\text{2nd model}} \oplus \cdots \Big)

where \oplus denotes the mixture operation between two or more model scores.

3.1 Mixture Operations

Mixture operations receive two or more scores (probabilities) and return the mixture score (probability). In this section, we explore different options for the mixture operation and discuss some of their characteristics (a short code sketch of these operations is given at the end of Section 3.2).

• Weighted Sum (wsum): the ensemble probability is proportional to the weighted sum of all individual model probabilities (i.e. a linear mixture):

p(\bar{e}|\bar{f}) \propto \sum_{m}^{M} \lambda_m \exp(w_m \cdot \phi_m)

where m denotes the index of the component models, M is the total number of them, and \lambda_m is the weight for component m.

• Weighted Max (wmax): the ensemble score is the weighted max of all model scores:

p(\bar{e}|\bar{f}) \propto \max_m \lambda_m \exp(w_m \cdot \phi_m)

• Model Switching (Switch): in model switching, each cell in the CKY chart gets populated only by rules from one of the models, and the other models' rules are discarded. This is based on the hypothesis that each component model is an expert on certain parts of the sentence. In this method, we need to define a binary indicator function \delta(\bar{f}, m) for each span and component model to specify which model's rules to retain for each span:

\delta(\bar{f}, m) = 1 if m = \arg\max_{n \in M} \psi(\bar{f}, n), and 0 otherwise.

The criterion for choosing a model for each cell, \psi(\bar{f}, n), could be based on:

- Max: for each cell, the model that has the highest weighted best-rule score wins:

\psi(\bar{f}, n) = \lambda_n \max_{\bar{e}} (w_n \cdot \phi_n(\bar{e},\bar{f}))

- Sum: instead of comparing only the scores of the best rules, the model with the highest weighted sum of the probabilities of the rules wins.
This sum has to take into account the translation table limit (ttl) on the number of rules suggested by each model for each cell:

\psi(\bar{f}, n) = \lambda_n \sum_{\bar{e}} \exp(w_n \cdot \phi_n(\bar{e},\bar{f}))

The probability of each phrase pair (\bar{e}, \bar{f}) is then computed as:

p(\bar{e}|\bar{f}) = \sum_{m}^{M} \delta(\bar{f}, m) \, p_m(\bar{e}|\bar{f})

• Product (prod): in product models or Products of Experts (Hinton, 1999), the probability of the ensemble model for a rule is computed as the product of the probabilities of all components (or, equivalently, the sum of their log-probabilities, i.e. a log-linear mixture). Product models can also make use of weights to control the contribution of each component. These models are generally known as Logarithmic Opinion Pools (LOPs), where:

p(\bar{e}|\bar{f}) \propto \exp\Big( \sum_{m}^{M} \lambda_m (w_m \cdot \phi_m) \Big)

Product models have been used for combining LMs and TMs in SMT, as well as in some other NLP tasks such as ensemble parsing (Petrov, 2010).

Figure 1: The cells in the CKY chart are populated using rules from all component models and sub-span cells.

Each of these mixture operations has a specific property that makes it work in specific domain adaptation or system combination scenarios. For instance, LOPs may not be optimal for domain adaptation in the setting where there are two or more models trained on heterogeneous corpora. As discussed in (Smith et al., 2005), LOPs work best when all the models' accuracies are high and close to each other, with some degree of diversity. LOPs give veto power to any of the component models, and this works perfectly for settings such as the one in (Petrov, 2010), where a number of parsers are trained by changing the randomization seeds but using the same base parser and the same training set. They noticed that parsers trained using different randomization seeds have high accuracies but some diversity among them, and they used product models to their advantage to get an even better parser. We assume instead that each of our models is an expert on some parts of the input, so the models do not necessarily agree on correct hypotheses. In other words, product models (or LOPs) tend to have intersection-style effects, while we are more interested in union-style effects. In Section 4.2, we compare the BLEU scores of the different mixture operations on a French-English experimental setup.

3.2 Normalization

Since the model scores in log-linear models are not normalized to form probability distributions, the scores that different models assign to each phrase pair may not be on the same scale. Therefore, mixing their scores might wash out the information in one (or some) of the models. We experimented with two different ways to deal with this normalization issue.

A practical but inexact heuristic is to normalize the scores over a shorter list: the list of rules coming from each model for a cell in the CKY chart is normalized before being mixed with the other phrase-table rules. However, experiments showed that replacing the scores with the normalized scores hurts the BLEU score radically, so we use the normalized scores only for pruning and leave the actual scores intact. We could also globally normalize the scores to obtain posterior probabilities using the inside-outside algorithm. However, we did not try this, as the BLEU scores we obtained with the normalization heuristic were not promising and it would impose a cost in decoding as well. More investigation of this issue is left for future work.

A more principled way is to systematically find the most appropriate model weights that can avoid this problem by scaling the scores properly.
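Before turning to how those weights are set, the following is a minimal sketch (ours, not the authors' Kriya code) of the mixture operations defined in Section 3.1, operating on the log-linear rule scores w_m · φ_m that the component models assign to a shared phrase pair or to a CKY cell. All names and the log/probability-domain conventions are our own assumptions, and for wsum/wmax the caller is assumed to pass scores only from the models that actually propose the rule, in keeping with the union-style combination described above.

```python
import math

def mix_rule_score(scores, weights, op):
    """Combine per-model log-linear scores w_m . phi_m for one phrase pair.

    scores:  log-domain scores from the component models that proposed the rule.
    weights: the corresponding component weights lambda_m.
    op:      'wsum', 'wmax' or 'prod'.
    Returns an unnormalized ensemble score used for ranking/pruning within a cell
    (probability-domain for wsum/wmax, log-domain for prod).
    """
    if op == "wsum":
        return sum(l * math.exp(s) for l, s in zip(weights, scores))
    if op == "wmax":
        return max(l * math.exp(s) for l, s in zip(weights, scores))
    if op == "prod":
        # weighted log-linear combination, i.e. the log of the weighted product
        return sum(l * s for l, s in zip(weights, scores))
    raise ValueError(f"unknown mixture operation: {op}")

def switching_choice(cell_rule_scores, weights, criterion="max"):
    """Model switching: pick the single component whose rules populate a CKY cell.

    cell_rule_scores: one list of log-domain rule scores per component model,
                      already truncated to the translation table limit (ttl).
    criterion:        'max' compares lambda_n * best rule score,
                      'sum' compares lambda_n * sum of exponentiated rule scores.
    Returns the index of the winning component model.
    """
    def psi(scores, lam):
        if not scores:
            return float("-inf")
        if criterion == "max":
            return lam * max(scores)
        return lam * sum(math.exp(s) for s in scores)

    values = [psi(s, l) for s, l in zip(cell_rule_scores, weights)]
    return max(range(len(values)), key=values.__getitem__)
```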
We used a publicly available toolkit, CONDOR (Vanden Berghen and Bersini, 2005), a direct optimizer based on Powell's algorithm that does not require explicit gradient information for the objective function. Component weights for each mixture operation are optimized on the dev set using CONDOR.

4 Experiments & Results

4.1 Experimental Setup

We carried out translation experiments using the European Medicines Agency (EMEA) corpus (Tiedemann, 2009) as IN, and the Europarl (EP) corpus (www.statmt.org/europarl) as OUT, for French to English translation. The dev and test sets were randomly chosen from the EMEA corpus (please contact the authors to access the data sets). The details of the datasets used are summarized in Table 1.

Dataset   Sents   Words (French)  Words (English)
EMEA      11770   168K            144K
Europarl  1.3M    40M             37M
Dev       1533    29K             25K
Test      1522    29K             25K
Table 1: Training, dev and test sets for EMEA.

For the mixture baselines, we used a standard one-pass phrase-based system (Koehn et al., 2003), Portage (Sadat et al., 2005), with the following 7 features: relative-frequency and lexical translation model (TM) probabilities in both directions; a word-displacement distortion model; a language model (LM); and a word count. The corpus was word-aligned using both HMM and IBM2 models, and the phrase table was the union of phrases extracted from these separate alignments, with a length limit of 7. It was filtered to retain the top 20 translations for each source phrase using the TM part of the current log-linear model. For ensemble decoding, we modified an in-house implementation of a hierarchical phrase-based system, Kriya (Sankaran et al., 2012), which uses the same features mentioned in (Chiang, 2005): forward and backward relative-frequency and lexical TM probabilities; LM; and word, phrase and glue-rule penalties. GIZA++ (Och and Ney, 2000) was used for word alignment, with a phrase length limit of 7. In both systems, feature weights were optimized using MERT (Och, 2003), and a 5-gram language model with Kneser-Ney smoothing was used in all the experiments. We used SRILM (Stolcke, 2002) as the language model toolkit. Fixing the language model allows us to compare the various translation model combination techniques.

4.2 Results

Table 2 shows the results of the baselines. The first group are the baseline results of the phrase-based system discussed in Section 2, and the second group are those of our hierarchical MT system. Since the Hiero baseline results were substantially better than those of the phrase-based model, we also implemented the best-performing baseline, the linear mixture, in our Hiero-style MT system, and in fact it achieves the highest BLEU score among all the baselines, as shown in Table 2. This baseline was run three times and the reported score is the average of the BLEU scores, with a standard deviation of 0.34.

Baseline  PBS    Hiero
IN        31.84  33.69
OUT       24.08  25.32
IN + OUT  31.75  33.76
LOGLIN    32.21  -
LINMIX    33.81  35.57
Table 2: The results of various baselines implemented in a phrase-based (PBS) and a Hiero SMT system on EMEA.
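The component weights λ_m used for the ensemble results reported next were tuned on the dev set with CONDOR, a derivative-free optimizer. As a rough stand-in sketch (an assumption on our part, not the authors' actual setup, which used CONDOR itself), the same kind of search can be expressed with SciPy's Powell method; decode_and_score is a hypothetical callback that runs the ensemble decoder with the given weights and returns dev-set BLEU.

```python
import numpy as np
from scipy.optimize import minimize

def tune_component_weights(n_components, decode_and_score):
    """Derivative-free search for component weights that maximize dev-set BLEU."""
    x0 = np.ones(n_components)

    def objective(x):
        w = np.abs(x)
        w = w / w.sum()              # keep weights non-negative and normalized
        return -decode_and_score(w)  # minimize negative BLEU

    result = minimize(objective, x0, method="Powell")
    w = np.abs(result.x)
    return w / w.sum()
```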
All of these mixture operations were able to significantly improve over the concatenation baseline. In particular, Switching:Max gained up to 2.2 BLEU points over the concatenation baseline and 0.39 BLEU points over the best-performing baseline (i.e. the linear mixture model implemented in Hiero), which is statistically significant based on Clark et al. (2011) (p = 0.02). Prod, when used with uniform weights, gets the lowest score among the mixture operations; however, after tuning, it learns to bias the weights towards one of the models and hence improves by 1.31 BLEU points. Although Switching:Sum outperforms the concatenation baseline, it is substantially worse than the other mixture operations. One explanation for why Switching:Max is the best-performing operation and Switching:Sum the worst, despite their similarities, is that Switching:Max prefers more peaked distributions, while Switching:Sum favours a model that has fewer hypotheses for each span.

Mixture Operation  Uniform  Tuned           Norm.
WMAX               35.39    35.47 (s=0.03)  35.47
WSUM               35.35    35.53 (s=0.04)  35.45
SWITCHING:MAX      35.93    35.96 (s=0.01)  32.62
SWITCHING:SUM      34.90    34.72 (s=0.23)  34.90
PROD               33.93    35.24 (s=0.05)  35.02
Table 3: The results of ensemble decoding on EMEA for Fr2En when using uniform weights, tuned weights and the normalization heuristic. The tuned BLEU scores are averaged over three runs with multiple initial points, as in (Clark et al., 2011), with the standard deviations in brackets.

An interesting observation based on the results in Table 3 is that uniform weights do reasonably well, given that the component weights are not optimized and therefore the model scores may not be on the same scale (see the discussion in §3.2). We suspect this is because a single LM is shared between both models. This shared component, combined with the standard L-1 normalization of each model's weights, controls the variance of the weights in the two models and hence prevents the models from assigning wildly different scores to the same input. However, this may not be the case when multiple LMs are used which are not shared.

Two sample sentences from the EMEA test set, along with their translations by the IN, OUT and Ensemble models, are shown in Figure 2. The boxes show how the Ensemble model is able to use n-grams from the IN and OUT models to construct a better translation than either of them. In the first example, there are two OOVs, one for each of the IN and OUT models. Our approach is able to resolve the OOV issues by taking advantage of the other model's presence. Similarly, the second example shows how ensemble decoding improves lexical choices as well as word re-orderings.
(2010) propose a similar method for machine translation that uses features to capture degrees of generality. Particularly, they include the output from an SVM classifier that uses the intersection between IN and OUT as positive examples. Unlike previous work on instance weighting in machine translation, they use phraselevel instances instead of sentences. A large body of work uses interpolation techniques to create a single TM/LM from interpolating a number of LMs/TMs. Two famous examples of such methods are linear mixtures and log-linear mixtures (Koehn and Schroeder, 2007; Civera and Juan, 2007; Foster and Kuhn, 2007) which were used as baselines and discussed in Section 2. Other methods include using self-training techniques to exploit monolingual in-domain data (Ueffing et al., 2007; 945 SOURCE am´enorrh´ee , menstruations irr´eguli`eres REF amenorrhoea , irregular menstruation IN amenorrhoea , menstruations irr´eguli`eres OUT am´enorrh´ee , irregular menstruation ENSEMBLE amenorrhoea , irregular menstruation SOURCE le traitement par naglazyme doit ˆetre supervis´e par un m´edecin ayant l’ exp´erience de la prise en charge des patients atteints de mps vi ou d’ une autre maladie m´etabolique h´er´editaire . REF naglazyme treatment should be supervised by a physician experienced in the management of patients with mps vi or other inherited metabolic diseases . IN naglazyme treatment should be supervis´e by a doctor the with in the management of patients with mps vi or other hereditary metabolic disease . OUT naglazyme ’s treatment must be supervised by a doctor with the experience of the care of patients with mps vi. or another disease hereditary metabolic . ENSEMBLE naglazyme treatment should be supervised by a physician experienced in the management of patients with mps vi or other hereditary metabolic disease . Figure 2: Examples illustrating how this method is able to use expertise of both out-of-domain and in-domain systems. Bertoldi and Federico, 2009). In this approach, a system is trained on the parallel OUT and IN data and it is used to translate the monolingual IN data set. Iteratively, most confident sentence pairs are selected and added to the training corpus on which a new system is trained. 5.2 System Combination Tackling the model adaptation problem using system combination approaches has been experimented in various work (Koehn and Schroeder, 2007; Hildebrand and Vogel, 2009). Among these approaches are sentence-based, phrase-based and word-based output combination methods. In a similar approach, Koehn and Schroeder (2007) use a feature of the factored translation model framework in Moses SMT system (Koehn and Schroeder, 2007) to use multiple alternative decoding paths. Two decoding paths, one for each translation table (IN and OUT), were used during decoding. The weights are set with minimum error rate training (Och, 2003). Our work is closely related to Koehn and Schroeder (2007) but uses a different approach to deal with multiple translation tables. The Moses SMT system implements (Koehn and Schroeder, 2007) and can treat multiple translation tables in two different ways: intersection and union. In intersection, for each span only the hypotheses would be used that are present in all phrase tables. For each set of hypothesis with the same source and target phrases, a new hypothesis is created whose feature-set is the union of feature sets of all corresponding hypotheses. Union, on the other hand, uses hypotheses from all the phrase tables. 
The feature set of these hypotheses are expanded to include one feature set for each table. However, for the corresponding feature values of those phrase-tables that did not have a particular phrase-pair, a default log probability value of 0 is assumed (Bertoldi and Federico, 2009) which is counter-intuitive as it boosts the score of hypotheses with phrase-pairs that do not belong to all of the translation tables. Our approach is different from Koehn and Schroeder (2007) in a number of ways. Firstly, unlike the multi-table support of Moses which only supports phrase-based translation table combination, our approach supports ensembles of both hierarchical and phrase-based systems. With little modification, it can also support ensemble of syntax-based systems with the other two state-of-the-art SMT sys946 tems. Secondly, our combining method uses the union option, but instead of preserving the features of all phrase-tables, it only combines their scores using various mixture operations. This enables us to experiment with a number of different operations as opposed to sticking to only one combination method. Finally, by avoiding increasing the number of features we can add as many translation models as we need without serious performance drop. In addition, MERT would not be an appropriate optimizer when the number of features increases a certain amount (Chiang et al., 2008). Our approach differs from the model combination approach of DeNero et al. (2010), a generalization of consensus or minimum Bayes risk decoding where the search space consists of those of multiple systems, in that model combination uses forest of derivations of all component models to do the combination. In other words, it requires all component models to fully decode each sentence, compute n-gram expectations from each component model and calculate posterior probabilities over translation derivations. While, in our approach we only use partial hypotheses from component models and the derivation forest is constructed by the ensemble model. A major difference is that in the model combination approach the component search spaces are conjoined and they are not intermingled as opposed to our approach where these search spaces are intermixed on spans. This enables us to generate new sentences that cannot be generated by component models. Furthermore, various combination methods can be explored in our approach. Finally, main techniques used in this work are orthogonal to our approach such as Minimum Bayes Risk decoding, using n-gram features and tuning using MERT. Finally, our work is most similar to that of Liu et al. (2009) where max-derivation and maxtranslation decoding have been used. Maxderivation finds a derivation with highest score and max-translation finds the highest scoring translation by summing the score of all derivations with the same yield. The combination can be done in two levels: translation-level and derivation-level. Their derivation-level max-translation decoding is similar to our ensemble decoding with wsum as the mixture operation. We did not restrict ourself to this particular mixture operation and experimented with a number of different mixing techniques and as Table 3 shows we could improve over wsum in our experimental setup. Liu et al. (2009) used a modified version of MERT to tune max-translation decoding weights, while we use a two-step approach using MERT for tuning each component model separately and then using CONDOR to tune component weights on top of them. 
6 Conclusion & Future Work In this paper, we presented a new approach for domain adaptation using ensemble decoding. In this approach a number of MT systems are combined at decoding time in order to form an ensemble model. The model combination can be done using various mixture operations. We showed that this approach can gain up to 2.2 BLEU points over its concatenation baseline and 0.39 BLEU points over a powerful mixture model. Future work includes extending this approach to use multiple translation models with multiple language models in ensemble decoding. Different mixture operations can be investigated and the behaviour of each operation can be studied in more details. We will also add capability of supporting syntax-based ensemble decoding and experiment how a phrase-based system can benefit from syntax information present in a syntax-aware MT system. Furthermore, ensemble decoding can be applied on domain mixing settings in which development sets and test sets include sentences from different domains and genres, and this is a very suitable setting for an ensemble model which can adapt to new domains at test time. In addition, we can extend our approach by applying some of the techniques used in other system combination approaches such as consensus decoding, using n-gram features, tuning using forest-based MERT, among other possible extensions. Acknowledgments This research was partially supported by an NSERC, Canada (RGPIN: 264905) grant and a Google Faculty Award to the last author. We would like to thank Philipp Koehn and the anonymous reviewers for their valuable comments. We also thank the developers of GIZA++ and Condor which we used for our experiments. 947 References M. Bacchiani and B. Roark. 2003. Unsupervised language model adaptation. In Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP ’03). 2003 IEEE International Conference on, volume 1, pages I–224 – I–227 vol.1, april. Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In Proceedings of the Fourth Workshop on Statistical Machine Translation, StatMT ’09, pages 182–189, Stroudsburg, PA, USA. ACL. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In In Proceedings of the Conference on Empirical Methods in Natural Language Processing. ACL. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In ACL ’05: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 263–270, Morristown, NJ, USA. ACL. Jorge Civera and Alfons Juan. 2007. Domain adaptation in statistical machine translation with mixture modelling. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 177–180, Stroudsburg, PA, USA. ACL. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT ’11, pages 176–181. ACL. P. Clarkson and A. Robinson. 1997. Language model adaptation using mixtures and an exponentially decaying cache. In Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’97)-Volume 2 - Volume 2, ICASSP ’97, pages 799–, Washington, DC, USA. 
IEEE Computer Society. Hal Daum´e, III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. J. Artif. Int. Res., 26:101–126, May. John DeNero, Shankar Kumar, Ciprian Chelba, and Franz Och. 2010. Model combination for machine translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 975–983, Stroudsburg, PA, USA. ACL. Matthias Eck, Stephan Vogel, and Alex Waibel. 2004. Language model adaptation for statistical machine translation based on information retrieval. In In Proceedings of LREC. George Foster and Roland Kuhn. 2007. Mixture-model adaptation for smt. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 128–135, Stroudsburg, PA, USA. ACL. George Foster, Cyril Goutte, and Roland Kuhn. 2010. Discriminative instance weighting for domain adaptation in statistical machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 451– 459, Stroudsburg, PA, USA. ACL. Almut Silja Hildebrand and Stephan Vogel. 2009. CMU system combination for WMT’09. In Proceedings of the Fourth Workshop on Statistical Machine Translation, StatMT ’09, pages 47–50, Stroudsburg, PA, USA. ACL. Almut Silja Hildebrand, Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Adaptation of the translation model for statistical machine translation based on information retrieval. In Proceedings of the 10th EAMT 2005, Budapest, Hungary, May. Geoffrey E. Hinton. 1999. Products of experts. In Artificial Neural Networks, 1999. ICANN 99. Ninth International Conference on (Conf. Publ. No. 470), volume 1, pages 1–6. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 264–271, Prague, Czech Republic, June. ACL. Philipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 224– 227, Stroudsburg, PA, USA. ACL. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology Conference of the NAACL, pages 127–133, Edmonton, May. NAACL. Yang Liu, Haitao Mi, Yang Feng, and Qun Liu. 2009. Joint decoding with multiple translation models. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2, ACL ’09, pages 576–584, Stroudsburg, PA, USA. ACL. F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the ACL, pages 440–447, Hongkong, China, October. Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In Proceedings of the 41th Annual Meeting of the ACL, Sapporo, July. ACL. 948 Slav Petrov. 2010. Products of random latent variable grammars. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 19–27, Stroudsburg, PA, USA. ACL. Fatiha Sadat, Howard Johnson, Akakpo Agbago, George Foster, Joel Martin, and Aaron Tikuisis. 2005. Portage: A phrase-based machine translation system. 
In In Proceedings of the ACL Worskhop on Building and Using Parallel Texts, Ann Arbor. ACL. Baskaran Sankaran, Majid Razmara, and Anoop Sarkar. 2012. Kriya an end-to-end hierarchical phrase-based mt system. The Prague Bulletin of Mathematical Linguistics, 97(97), April. Kristie Seymore and Ronald Rosenfeld. 1997. Using story topics for language model adaptation. In George Kokkinakis, Nikos Fakotakis, and Evangelos Dermatas, editors, EUROSPEECH. ISCA. Andrew Smith, Trevor Cohn, and Miles Osborne. 2005. Logarithmic opinion pools for conditional random fields. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 18–25, Stroudsburg, PA, USA. ACL. Andreas Stolcke. 2002. SRILM – an extensible language modeling toolkit. In Proceedings International Conference on Spoken Language Processing, pages 257– 286. Jorg Tiedemann. 2009. News from opus - a collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. John Benjamins, Amsterdam/Philadelphia. Nicola Ueffing, Gholamreza Haffari, and Anoop Sarkar. 2007. Transductive learning for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 25–32, Prague, Czech Republic, June. ACL. Frank Vanden Berghen and Hugues Bersini. 2005. CONDOR, a new parallel, constrained extension of powell’s UOBYQA algorithm: Experimental results and comparison with the DFO algorithm. Journal of Computational and Applied Mathematics, 181:157–175, September. 949
2012
99
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1–10, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Shift-Reduce Parsing Algorithm for Phrase-based String-to-Dependency Translation Yang Liu State Key Laboratory of Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Science and Technology Tsinghua University, Beijing 100084, China [email protected] Abstract We introduce a shift-reduce parsing algorithm for phrase-based string-todependency translation. As the algorithm generates dependency trees for partial translations left-to-right in decoding, it allows for efficient integration of both n-gram and dependency language models. To resolve conflicts in shift-reduce parsing, we propose a maximum entropy model trained on the derivation graph of training data. As our approach combines the merits of phrase-based and string-todependency models, it achieves significant improvements over the two baselines on the NIST Chinese-English datasets. 1 Introduction Modern statistical machine translation approaches can be roughly divided into two broad categories: phrase-based and syntax-based. Phrase-based approaches treat phrase, which is usually a sequence of consecutive words, as the basic unit of translation (Koehn et al., 2003; Och and Ney, 2004). As phrases are capable of memorizing local context, phrase-based approaches excel at handling local word selection and reordering. In addition, it is straightforward to integrate n-gram language models into phrase-based decoders in which translation always grows left-to-right. As a result, phrase-based decoders only need to maintain the boundary words on one end to calculate language model probabilities. However, as phrase-based decoding usually casts translation as a string concatenation problem and permits arbitrary permutation, it proves to be NP-complete (Knight, 1999). Syntax-based approaches, on the other hand, model the hierarchical structure of natural languages (Wu, 1997; Yamada and Knight, 2001; Chiang, 2005; Quirk et al., 2005; Galley et al., 2006; Liu et al., 2006; Huang et al., 2006; Shen et al., 2008; Mi and Huang, 2008; Zhang et al., 2008). As syntactic information can be exploited to provide linguistically-motivated reordering rules, predicting non-local permutation is computationally tractable in syntax-based approaches. Unfortunately, as syntax-based decoders often generate target-language words in a bottom-up way using the CKY algorithm, integrating n-gram language models becomes more expensive because they have to maintain target boundary words at both ends of a partial translation (Chiang, 2007; Huang and Chiang, 2007). Moreover, syntax-based approaches often suffer from the rule coverage problem since syntactic constraints rule out a large portion of nonsyntactic phrase pairs, which might help decoders generalize well to unseen data (Marcu et al., 2006). Furthermore, the introduction of nonterminals makes the grammar size significantly bigger than phrase tables and leads to higher memory requirement (Chiang, 2007). As a result, incremental decoding with hierarchical structures has attracted increasing attention in recent years. 
While some authors try to integrate syntax into phrase-based decoding (Galley and Manning, 2008; Galley and Manning, 2009; Feng et al., 2010), others develop incremental algorithms for syntax-based models (Watanabe et al., 2006; Huang and Mi, 2010; Dyer and Resnik, 2010; Feng et al., 2012). Despite these successful efforts, challenges still remain for both directions. While parsing algorithms can be used to parse partial translations in phrase-based decoding, the search space is significantly enlarged since there are exponentially many parse trees for exponentially many translations. On the other hand, although target words can be generated left-to-right by altering the way of tree transversal in syntaxbased models, it is still difficult to reach full rule coverage as compared with phrase table. 1 zongtong jiang yu siyue lai lundun fangwen The President will visit London in April source phrase target phrase dependency category r1 fangwen visit {} fixed r2 yu siyue in April {1 →2} fixed r3 zongtong jiang The President will {2 →1} floating left r4 yu siyue lai lundun London in April {2 →3} floating right r5 zongtong jiang President will {} ill-formed Figure 1: A training example consisting of a (romanized) Chinese sentence, an English dependency tree, and the word alignment between them. Each translation rule is composed of a source phrase, a target phrase with a set of dependency arcs. Following Shen et al. (2008), we distinguish between fixed, floating, and ill-formed structures. In this paper, we propose a shift-reduce parsing algorithm for phrase-based string-to-dependency translation. The basic unit of translation in our model is string-to-dependency phrase pair, which consists of a phrase on the source side and a dependency structure on the target side. The algorithm generates well-formed dependency structures for partial translations left-to-right using string-todependency phrase pairs. Therefore, our approach is capable of combining the advantages of both phrase-based and syntax-based approaches: 1. compact rule table: our rule table is a subset of the original string-to-dependency grammar (Shen et al., 2008; Shen et al., 2010) by excluding rules with non-terminals. 2. full rule coverage: all phrase pairs, both syntactic and non-syntactic, can be used in our algorithm. This is the same with Moses (Koehn et al., 2007). 3. efficient integration of n-gram language model: as translation grows left-to-right in our algorithm, integrating n-gram language models is straightforward. 4. exploiting syntactic information: as the shift-reduce parsing algorithm generates target language dependency trees in decoding, dependency language models (Shen et al., 2008; Shen et al., 2010) can be used to encourage linguistically-motivated reordering. 5. resolving local parsing ambiguity: as dependency trees for phrases are memorized in rules, our approach avoids resolving local parsing ambiguity and explores in a smaller search space than parsing word-by-word on the fly in decoding (Galley and Manning, 2009). We evaluate our method on the NIST ChineseEnglish translation datasets. Experiments show that our approach significantly outperforms both phrase-based (Koehn et al., 2007) and string-todependency approaches (Shen et al., 2008) in terms of BLEU and TER. 2 Shift-Reduce Parsing for Phrase-based String-to-Dependency Translation Figure 1 shows a training example consisting of a (romanized) Chinese sentence, an English dependency tree, and the word alignment between them. Following Shen et al. 
(2008), string-todependency rules without non-terminals can be extracted from the training example. As shown in Figure 1, each rule is composed of a source phrase and a target dependency structure. Shen et al. (2008) divide dependency structures into two broad categories: 1. well-formed (a) fixed: the head is known or fixed; 2 0 ◦◦◦◦◦◦◦ 1 S r3 [The President will] • • ◦◦◦◦◦ 2 S r1 [The President will] [visit] • • ◦◦◦◦• 3 Rl [The President will visit] • • ◦◦◦◦• 4 S r4 [The President will visit] [London in April] • • • • • • • 5 Rr [The President will visit London in April] • • • • • • • step action rule stack coverage Figure 2: Shift-reduce parsing with string-to-dependency phrase pairs. For each state, the algorithm maintains a stack to store items (i.e., well-formed dependency structures). At each step, it chooses one action to extend a state: shift (S), reduce left (Rl), or reduce right (Rr). The decoding process terminates when all source words are covered and there is a complete dependency tree in the stack. (b) floating: sibling nodes of a common head, but the head itself is unspecified or floating. Each of the siblings must be a complete constituent. 2. ill-formed: neither fixed nor floating. We further distinguish between left and right floating structures according to the position of head. For example, as “The President will” is the left dependant of its head “visit”, it is a left floating structure. To integrate the advantages of phrase-based and string-to-dependency models, we propose a shift-reduce algorithm for phrase-based string-todependency translation. Figure 2 shows an example. We describe a state (i.e., parser configuration) as a tuple ⟨S, C⟩where S is a stack that stores items and C is a coverage vector that indicates which source words have been translated. Each item s ∈S is a well-formed dependency structure. The algorithm starts with an empty state. At each step, it chooses one of the three actions (Huang et al., 2009) to extend a state: 1. shift (S): move a target dependency structure onto the stack; 2. reduce left (Rl): combine the two items on the stack, st and st−1 (t ≥2), with the root of st as the head and replace them with a combined item; 3. reduce right (Rr): combine the two items on the stack, st and st−1 (t ≥2), with the root of st−1 as the head and replace them with a combined item. The decoding process terminates when all source words are covered and there is a complete dependency tree in the stack. Note that unlike monolingual shift-reduce parsers (Nivre, 2004; Zhang and Clark, 2008; Huang et al., 2009), our algorithm does not maintain a queue for remaining words of the input because the future dependency structure to be shifted is unknown in advance in the translation scenario. Instead, we use a coverage vector on the source side to determine when to terminate the algorithm. For an input sentence of J words, the number of actions is 2K −1, where K is the number of rules used in decoding. 1 There are always K shifts and 1Empirically, we find that the average number of stacks for J words is about 1.5 × J on the Chinese-English data. 3 [The President] [will] [visit] [The President] [will] [visit] [London] [The President] [will] [visit London] [The President] [will visit London] [The President] [will visit] [The President will visit] [The President will visit] [London] [The President will visit London] S Rr Rl Rl Rl Rl S Rr Figure 3: Ambiguity in shift-reduce parsing. 
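To make the shift-reduce transitions concrete, the sketch below spells out a state ⟨S, C⟩ and the three actions in simplified form; the Item and State classes and the covered_span argument are illustrative stand-ins, not the authors' decoder, and rule lookup, scoring and beam search are omitted.

```python
# Minimal sketch of the parser state and the three actions described above.
# An "item" is a target dependency structure, represented here by its root
# word and its dependents; the coverage vector tracks translated source words.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Item:                      # a well-formed target dependency structure
    root: str
    children: List["Item"] = field(default_factory=list)

@dataclass
class State:                     # <S, C>: stack of items + source coverage
    stack: List[Item] = field(default_factory=list)
    coverage: Tuple[bool, ...] = ()

def shift(state, item, covered_span):
    cov = list(state.coverage)
    for j in covered_span:       # mark source words translated by this rule
        cov[j] = True
    return State(state.stack + [item], tuple(cov))

def reduce_left(state):          # root of s_t becomes the head of s_{t-1}
    *rest, s_prev, s_top = state.stack
    s_top.children.append(s_prev)
    return State(rest + [s_top], state.coverage)

def reduce_right(state):         # root of s_{t-1} becomes the head of s_t
    *rest, s_prev, s_top = state.stack
    s_prev.children.append(s_top)
    return State(rest + [s_prev], state.coverage)

def finished(state):             # all source words covered, one complete tree
    return all(state.coverage) and len(state.stack) == 1

# initial state for a hypothetical 7-word source sentence
init = State(coverage=(False,) * 7)
```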
st−1 st legal action(s) yes S h yes S l yes S r no h h yes S, Rl, Rr h l yes S h r yes Rr l h yes Rl l l yes S l r no r h no r l no r r no Table 1: Conflicts in shift-reduce parsing. st and st−1 are the top two items in the stack of a state. We use “h” to denote fixed structure, “l” to denote left floating structure, and “r” to denote right floating structure. It is clear that only “h+h” is ambiguous. K −1 reductions. It is easy to verify that the reduce left and reduce right actions are equivalent to the left adjoining and right adjoining operations defined by Shen et al. (2008). They suffice to operate on wellformed structures and produce projective dependency parse trees. Therefore, with dependency structures present in the stacks, it is possible to use dependency language models to encourage linguistically plausible phrase reordering. 3 A Maximum Entropy Based Shift-Reduce Parsing Model Shift-reduce parsing is efficient but suffers from parsing errors caused by syntactic ambiguity. Figure 3 shows two (partial) derivations for a dependency tree. Consider the item on the top, the algorithm can either apply a shift action to move a new item or apply a reduce left action to obtain a bigger structure. This is often referred to as conflict in the shift-reduce dependency parsing literature (Huang et al., 2009). In this work, the shift-reduce parser faces four types of conflicts: 1. shift vs. shift; 2. shift vs. reduce left; 3. shift vs. reduce right; 4. reduce left vs. reduce right. Fortunately, if we distinguish between left and right floating structures, it is possible to rule out most conflicts. Table 1 shows the relationship between conflicts, dependency structures and actions. We use st and st−1 to denote the top two 4 [The President will visit London][in April] DT NNP MD VB NNP IN IN type feature templates Unigram c Wh(st) Wh(st−1) Wlc(st) Wrc(st−1) Th(st) Th(st−1) Tlc(st) Trc(st−1) Bigram Wh(st) ◦Wh(st−1) Th(St) ◦Th(st−1) Wh(st) ◦Th(st) Wh(st−1) ◦Th(st−1) Wh(st) ◦Wrc(st−1) Wh(st−1) ◦Wlc(st) Trigram c ◦Wh(st) ◦W (st−1) c ◦Th(st) ◦Th(st−1) Wh(st) ◦Wh(st−1) ◦Tlc(st) Wh(st) ◦Wh(st−1) ◦Trc(st−1) Th(st) ◦Th(st−1) ◦Tlc(st) Th(st) ◦Th(st−1) ◦Trc(st−1) Figure 4: Feature templates for maximum entropy based shift-reduce parsing model. c is a boolean value that indicate whether all source words are covered (shift is prohibited if true), Wh(·) and Th(·) are functions that get the root word and tag of an item, Wlc(·) and Tlc(·) returns the word and tag of the left most child of the root, Wrc(·) amd Trc(·) returns the word and tag of the right most child of the root. Symbol ◦denotes feature conjunction. In this example, c = true, Wh(st) = in, Th(st) = IN, Wh(st−1) = visit, Wlc(st−1) = London. items in the stack. “h” stands for fixed structure, “l” for left floating structure, and “r” for right floating structure. If the stack is empty, the only applicable action is shift. If there is only one item in the stack and the item is either fixed or left floating, the only applicable action is shift. Note that it is illegal to shift a right floating structure onto an empty stack because it will never be reduced. If the stack contains at least two items, only “h+h” is ambiguous and the others are either unambiguous or illegal. Therefore, we only need to focus on how to resolve conflicts for the “h+h” case (i.e., the top two items in a stack are both fixed structures). 
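As a reading of Table 1 and the constraints just described, the following sketch enumerates the legal actions for a given stack configuration; the string labels "h", "l", "r" and the function name legal_actions are illustrative choices rather than code from the paper.

```python
# Which actions are legal given the types of the top two stack items,
# following Table 1: "h" fixed, "l" left floating, "r" right floating.

LEGAL = {
    ("h", "h"): {"S", "Rl", "Rr"},   # the only ambiguous case
    ("h", "l"): {"S"},
    ("h", "r"): {"Rr"},
    ("l", "h"): {"Rl"},
    ("l", "l"): {"S"},
    ("l", "r"): set(),               # illegal configurations
    ("r", "h"): set(),
    ("r", "l"): set(),
    ("r", "r"): set(),
}

def legal_actions(stack_types, all_covered=False):
    """stack_types: types of the items on the stack, bottom to top."""
    if len(stack_types) == 0:
        acts = {"S"}
    elif len(stack_types) == 1:
        # shifting "r" onto an empty stack is forbidden: it could never reduce
        acts = {"S"} if stack_types[-1] in ("h", "l") else set()
    else:
        acts = set(LEGAL[(stack_types[-2], stack_types[-1])])
    if all_covered:                  # shift is prohibited once all source
        acts.discard("S")            # words are covered
    return acts

print(legal_actions(["h", "h"]))     # {'S', 'Rl', 'Rr'} -> resolved by maxent
```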
We propose a maximum entropy model to resolve the conflicts for “h+h”: 2 Pθ(a|c, st, st−1) = exp(θ · h(a, c, st, st−1)) P a exp(θ · h(a, c, st, st−1)) where a ∈{S, Rl, Rr} is an action, c is a boolean value that indicates whether all source words are covered (shift is prohibited if true), st and st−1 are the top two items on the stack, h(a, c, st, st−1) is a vector of binary features and θ is a vector of feature weights. Figure 4 shows the feature templates used in our experiments. Wh(·) and Th(·) are functions that get the root word and tag of an item, Wlc(·) and Tlc(·) returns the word and tag of the left most child of the root, Wrc(·) and Trc(·) returns the 2The shift-shift conflicts always exist because there are usually multiple rules that can be shifted. This can be revolved using standard features in phrase-based models. word and tag of the right most child of the root. In this example, c = true, Wh(st) = in, Th(st) = IN, Wh(st−1) = visit, Wlc(st−1) = London. To train the model, we need an “oracle” or goldstandard action sequence for each training example. Unfortunately, such oracle turns out to be non-unique even for monolingual shift-reduce dependency parsing (Huang et al., 2009). The situation for phrase-based shift-reduce parsing aggravates because there are usually multiple ways of segmenting sentence into phrases. To alleviate this problem, we introduce a structure called derivation graph to compactly represent all derivations of a training example. Figure 3 shows a (partial) derivation graph, in which a node corresponds to a state and an edge corresponds to an action. The graph begins with an empty state and ends with the given training example. More formally, a derivation graph is a directed acyclic graph G = ⟨V, E⟩where V is a set of nodes and E is a set of edges. Each node v corresponds to a state in the shift-reduce parsing process. There are two distinguished nodes: v0, the staring empty state, and v|V |, the ending completed state. Each edge e = (a, i, j) transits node vi to node vj via an action a ∈{S, Rl, Rr}. To build the derivation graph, our algorithm starts with an empty state and iteratively extends an unprocessed state until reaches the completed state. During the process, states that violate the training example are discarded. Even so, there are still exponentially many states for a training example, especially for long sentences. Fortunately, we 5 Algorithm 1 Beam-search shift-reduce parsing. 1: procedure PARSE(f) 2: V ←∅ 3: ADD(v0, V[0]) 4: k ←0 5: while V[k] ̸= ∅do 6: for all v ∈V[k] do 7: for all a ∈{S, Rl, Rr} do 8: EXTEND(f, v, a, V) 9: end for 10: end for 11: k ←k + 1 12: end while 13: end procedure only need to focus on “h+h” states. In addition, we follow Huang et al. (2009) to use the heuristic of “shortest stack” to always prefer Rl to S. 4 Decoding Our decoder is based on a linear model (Och, 2003) with the following features: 1. relative frequencies in two directions; 2. lexical weights in two directions; 3. phrase penalty; 4. distance-based reordering model; 5. lexicaized reordering model; 6. n-gram language model model; 7. word penalty; 8. ill-formed structure penalty; 9. dependency language model; 10. maximum entropy parsing model. In practice, we extend deterministic shiftreduce parsing with beam search (Zhang and Clark, 2008; Huang et al., 2009). As shown in Algorithm 1, the algorithm maintains a list of stacks V and each stack groups states with the same number of accumulated actions (line 2). 
The stack list V initializes with an empty state v0 (line 3). Then, the states in the stack are iteratively extended until there are no incomplete states (lines 4-12). The search space is constrained by discarding any state that has a score worse than: 1. β multiplied with the best score in the stack, or 2. the score of b-th best state in the stack. As the stack of a state keeps changing during the decoding process, the context information needed to calculate dependency language model and maximum entropy model probabilities (e.g., root word, leftmost child, etc.) changes dynamically as well. As a result, the chance of risk-free hypothesis recombination (Koehn et al., 2003) significantly decreases because complicated contextual information is much less likely to be identical. Therefore, we use hypergraph reranking (Huang and Chiang, 2007; Huang, 2008), which proves to be effective for integrating non-local features into dynamic programming, to alleviate this problem. The decoding process is divided into two passes. In the first pass, only standard features (i.e., features 1-7 in the list in the beginning of this section) are used to produce a hypergraph. 3 In the second pass, we use the hypergraph reranking algorithm (Huang, 2008) to find promising translations using additional dependency features (i.e., features 8-10 in the list). As hypergraph is capable of storing exponentially many derivations compactly, the negative effect of propagating mistakes made in the first pass to the second pass can be minimized. To improve rule coverage, we follow Shen et al. (2008) to use ill-formed structures in decoding. If an ill-formed structure has a single root, it can treated as a (pseudo) fixed structure; otherwise it is transformed to one (pseudo) left floating structure and one (pseudo) right floating structure. We use a feature to count how many ill-formed structures are used in decoding. 5 Experiments We evaluated our phrase-based string-todependency translation system on ChineseEnglish translation. The training data consists of 2.9M pairs of sentences with 76.0M Chinese words and 82.2M English words. We used the Stanford parser (Klein and Manning, 2003) to get dependency trees for English sentences. We used the SRILM toolkit (Stolcke, 2002) to train a 3Note that the first pass does not work like a phrase-based decoder because it yields dependency trees on the target side. A uniform model (i.e., each action has a fixed probability of 1/3) is used to resolve “h+h” conflicts. 6 MT02 (tune) MT03 MT04 MT05 system BLEU TER BLEU TER BLEU TER BLEU TER phrase 34.88 57.00 33.82 57.19 35.48 56.48 32.52 57.62 dependency 35.23 56.12 34.20 56.36 36.01 55.55 33.06 56.94 this work 35.71∗∗ 55.87∗∗ 34.81∗∗+ 55.94∗∗+ 36.37∗∗ 55.02∗∗+ 33.53∗∗ 56.58∗∗ Table 2: Comparison with Moses (Koehn et al., 2007) and a re-implementation of the bottom-up stringto-dependency decoder (Shen et al., 2008) in terms of uncased BLEU and TER. We use randomization test (Riezler and Maxwell, 2005) to calculate statistical significance. *: significantly better than Moses (p < 0.05), **: significantly better than Moses (p < 0.01), +: significantly better than string-todependency (p < 0.05), ++: significantly better than string-to-dependency (p < 0.01). features BLEU TER standard 34.79 56.93 + depLM 35.29∗ 56.17∗∗ + maxent 35.40∗∗ 56.09∗∗ + depLM & maxent 35.71∗∗ 55.87∗∗ Table 3: Contribution of maximum entropy shiftreduce parsing model. “standard” denotes using standard features of phrase-based system. 
Adding dependency language model (“depLM”) and the maximum entropy shift-reduce parsing model (“maxent”) significantly improves BLEU and TER on the development set, both separately and jointly. 4-gram language model on the Xinhua portion of the GIGAWORD coprus, which contians 238M English words. A 3-gram dependency language model was trained on the English dependency trees. We used the 2002 NIST MT ChineseEnglish dataset as the development set and the 2003-2005 NIST datasets as the testsets. We evaluated translation quality using uncased BLEU (Papineni et al., 2002) and TER (Snover et al., 2006). The features were optimized with respect to BLEU using the minimum error rate training algorithm (Och, 2003). We chose the following two systems that are closest to our work as baselines: 1. The Moses phrase-based decoder (Koehn et al., 2007). 2. A re-implementation of bottom-up string-todependency decoder (Shen et al., 2008). All the three systems share with the same targetside parsed, word-aligned training data. The histogram pruning parameter b is set to 100 and rules coverage BLEU TER well-formed 44.87 34.42 57.35 all 100.00 35.71∗∗ 55.87∗∗ Table 4: Comparison of well-formed and illformed structures. Using all rules significantly outperforms using only well-formed structures. BLEU and TER scores are calculated on the development set. phrase table limit is set to 20 for all the three systems. Moses shares the same feature set with our system except for the dependency features. For the bottom-up string-to-dependency system, we included both well-formed and ill-formed structures in chart parsing. To control the grammar size, we only extracted “tight” initial phrase pairs (i.e., the boundary words of a phrase must be aligned) as suggested by (Chiang, 2007). For our system, we used the Le Zhang’s maximum entropy modeling toolkit to train the shift-reduce parsing model after extracting 32.6M events from the training data. 4 We set the iteration limit to 100. The accuracy on the training data is 90.18%. Table 2 gives the performance of Moses, the bottom-up string-to-dependency system, and our system in terms of uncased BLEU and TER scores. From the same training data, Moses extracted 103M bilingual phrases, the bottomup string-to-dependency system extracted 587M string-to-dependency rules, and our system extracted 124M phrase-based dependency rules. We find that our approach outperforms both baselines systematically on all testsets. We use randomization test (Riezler and Maxwell, 2005) to calculate statistical significance. As our system can take full advantage of lexicalized reordering and depen4http://homepages.inf.ed.ac.uk/lzhang10/maxent.html 7 30.50 31.00 31.50 32.00 32.50 33.00 33.50 34.00 34.50 0 2 4 6 8 10 12 BLEU distortion limit this work Moses Figure 5: Performance of Moses and our system with various distortion limits. dency language models without loss in rule coverage, it achieves significantly better results than Moses on all test sets. The gains in TER are much larger than BLEU because dependency language models do not model n-grams directly. Compared with the bottom-up string-to-dependency system, our system outperforms consistently but not significantly in all cases. The average decoding time for Moses is 3.67 seconds per sentence, bottomup string-to-dependency is 13.89 seconds, and our system is 4.56 seconds. Table 3 shows the effect of hypergraph reranking. In the first pass, our decoder uses standard phrase-based features to build a hypergraph. 
The BLEU score is slightly lower than Moses with the same configuration. One possible reason is that our decoder organizes stacks with respect to actions, whereas Moses groups partial translations with the same number of covered source words in stacks. In the second pass, our decoder reranks the hypergraph with additional dependency features. We find that adding dependency language and maximum entropy shift-reduce models consistently brings significant improvements, both separately and jointly. We analyzed translation rules extracted from the training data. Among them, well-formed structures account for 43.58% (fixed 33.21%, floating left 9.01%, and floating right 1.36%) and illformed structures 56.42%. As shown in Table 4, using all rules clearly outperforms using only well-formed structures. Figure 5 shows the performance of Moses and our system with various distortion limits on the development set. Our system consistently outperforms Moses in all cases, suggesting that adding dependency helps improve phrase reordering. 6 Related Work The work of Galley and Manning (2009) is closest in spirit to ours. They introduce maximum spanning tree (MST) parsing (McDonald et al., 2005) into phrase-based translation. The system is phrase-based except that an MST parser runs to parse partial translations at the same time. One challenge is that MST parsing itself is not incremental, making it expensive to identify loops during hypothesis expansion. On the contrary, shiftreduce parsing is naturally incremental and can be seamlessly integrated into left-to-right phrasebased decoding. More importantly, in our work dependency trees are memorized for phrases rather than being generated word by word on the fly in decoding. This treatment might not only reduce decoding complexity but also potentially revolve local parsing ambiguity. Our decoding algorithm is similar to Gimpel and Smith (2011)’s lattice parsing algorithm as we divide decoding into two steps: hypergraph generation and hypergraph rescoring. The major difference is that our hypergraph is not a phrasal lattice because each phrase pair is associated with a dependency structure on the target side. In other words, our second pass is to find the Viterbi derivation with addition features rather than parsing the phrasal lattice. In addition, their algorithm produces phrasal dependency parse trees while the leaves of our dependency trees are words, making dependency language models can be directly used. Shift-reduce parsing has been successfully used in phrase-based decoding but limited to adding structural constraints. Galley and Manning (2008) propose a shift-reduce algorithm to integrate a hierarchical reordering model into phrase-based systems. Feng et al. (2010) use shift-reduce parsing to impose ITG (Wu, 1997) constraints on phrase permutation. Our work differs from theirs by going further to incorporate linguistic syntax into phrase-based decoding. Along another line, a number of authors have developed incremental algorithms for syntaxbased models (Watanabe et al., 2006; Huang and Mi, 2010; Dyer and Resnik, 2010; Feng et al., 2012). Watanabe et al. (2006) introduce an Earlystyle top-down parser based on binary-branching Greibach Normal Form. Huang et al. (2010), Dyer 8 and Resnik (2010), and Feng et al. (2012) use dotted rules to change the tree transversal to generate target words left-to-right, either top-down or bottom-up. 7 Conclusion We have presented a shift-reduce parsing algorithm for phrase-based string-to-dependency translation. 
The algorithm generates dependency structures incrementally using string-todependency phrase pairs. Therefore, our approach is capable of combining the advantages of both phrase-based and string-to-dependency models, it outperforms the two baselines on Chineseto-English translation. In the future, we plan to include more contextual information (e.g., the uncovered source phrases) in the maximum entropy model to resolve conflicts. Another direction is to adapt the dynamic programming algorithm proposed by Huang and Sagae (2010) to improve our string-todependency decoder. It is also interesting to compare with applying word-based shift-reduce parsing to phrase-based decoding similar to (Galley and Manning, 2009). Acknowledgments This research is supported by the 863 Program under the grant No 2012AA011102 and No. 2011AA01A207, by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office, and by a Research Fund No. 20123000007 from Tsinghua MOE-Microsoft Joint Laboratory. References David Chiang. 2005. A hiearchical phrase-based model for statistical machine translation. In Proc. of ACL 2005. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Chris Dyer and Philip Resnik. 2010. Context-free reordering, finite-state translation. In Proc. of NAACL 2010. Yang Feng, Haitao Mi, Yang Liu, and Qun Liu. 2010. An efficient shift-reduce decoding algorithm for phrased-based machine translation. In Proc. of COLING 2010. Yang Feng, Yang Liu, Qun Liu, and Trevor Cohn. 2012. Left-to-right tree-to-string decoding with prediction. In Proc. of EMNLP 2012. Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proc. of EMNLP 2008. Michel Galley and Christopher D. Manning. 2009. Quadratic-time dependency parsing for machine translation. In Proc. of ACL 2009. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. of ACL 2006. Kevin Gimpel and Noah A. Smith. 2011. Quasisynchronous phrase dependency grammars for machine translation. In Proc. of EMNLP 2011. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proc. of ACL 2007. Liang Huang and Haitao Mi. 2010. Efficient incremental decoding for tree-to-string translation. In Proc. of EMNLP 2010. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proc. of ACL 2010. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. of AMTA 2006. Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proc. of EMNLP 2009. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proc. of ACL 2008. Dan Klein and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proc. of ACL 2003. Kevin Knight. 1999. Decoding complexity in wordreplacement translation models. Computational Linguistics. Philipp Koehn, Franz Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL 2003. 
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL 2007. 9 Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proc. of ACL 2006. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. Spmt: Statistical machine translation with syntactified target language phrases. In Proc. of EMNLP 2006. R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proc. of EMNLP 2005. Haitao Mi and Liang Huang. 2008. Forest-based translation. In Proc. of ACL 2008. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proc. of ACL 2004 Workshop Incremental Parsing: Bringning Engineering and Cognition Together. Franz Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4). Franz Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL 2003. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL 2002. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal smt. In Proc. of ACL 2005. S. Riezler and J. Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for mt. In Proc. of ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proc. of ACL 2008. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2010. String-to-dependency statistical machine translation. Computational Linguistics, 36(4). Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proc. of AMTA 2006. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Proc. of ICSLP 2002. Taro Watanabe, Hajime Tsukuda, and Hideki Isozaki. 2006. Left-to-right target generation for hierarchical phrase-based translation. In Proc. of ACL 2006. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proc. of ACL 2001. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam search. In Proc. of EMNLP 2008. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008. A tree sequence alignment-based tree-to-tree translation model. In Proc. of ACL 2008. 10
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 93–103, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Graph-based Local Coherence Modeling Camille Guinaudeau and Michael Strube Heidelberg Institute for Theoretical Studies gGmbH Schloss-Wolfsbrunnenweg 35 69118 Heidelberg, Germany (camille.guinaudeau|michael.strube)@h-its.org Abstract We propose a computationally efficient graph-based approach for local coherence modeling. We evaluate our system on three tasks: sentence ordering, summary coherence rating and readability assessment. The performance is comparable to entity grid based approaches though these rely on a computationally expensive training phase and face data sparsity problems. 1 Introduction Many NLP applications which process or generate texts rely on information about local coherence, i.e. information about which entities occur in which sentence and how the entities are distributed in the text. This led to the development of many theories and models accounting for local coherence. One popular model, the centering model (Grosz et al., 1995), uses a ranking of discourse entities realized in particular sentences and computes transitions between adjacent sentences to provide insight in the felicity of texts. Centering models local coherence rather generally and has been applied to the generation of referring expressions (Kibble and Power, 2004), to resolve pronouns (Brennan et al., 1987, inter alia), to score essays (Miltsakaki and Kukich, 2004), to arrange sentences in the correct order (Karamanis et al., 2009), and to many other tasks. Poesio et al. (2004) observe that it is not clear how to set parameters in the centering model so that optimal performance in different tasks and languages can be achieved. Barzilay and Lapata (2008) criticize research on centering to be too dependent on manually annotated input. This led them to propose a local coherence model relying on a more parsimonious representation, the entity grid model. The entity grid is a two dimensional array where the rows represent sentences and the columns discourse entities. From this grid Barzilay and Lapata (2008) derive probabilities of transitions between adjacent sentences which are used as features for machine learning algorithms. They evaluate this approach successfully on sentence ordering, summary coherence rating, and readability assessment. However, their approach has some disadvantages which they point out themselves: data sparsity, domain dependence and computational complexity, especially in terms of feature space issues while building their model (Barzilay and Lapata (2008, p.8, p.10, p.30), Elsner and Charniak (2011, p.126, p.127)). In order to overcome these problems we propose to represent entities in a graph and then model local coherence by applying centrality measures to the nodes in the graph (Section 3). We claim that a graph is a more powerful representation for local coherence than the entity grid (Barzilay and Lapata, 2008) which is restricted to transitions between adjacent sentences. The graph can easily span the entire text without leading to computational complexity and data sparsity problems. Similar to the application of graph-based methods in other areas of NLP (e.g. 
work on word sense disambiguation by Navigli and Lapata (2010); for an overview over graph-based methods in NLP see Mihalcea and Radev (2011)) we model local coherence by relying only on centrality measures applied to the nodes in the graph. We apply our graph-based model to the three tasks handled by Barzilay and Lapata (2008) to show that it provides the same flexibility over disparate tasks as the entity grid model: sentence ordering (Section 4.1), summary coherence ranking (Section 4.2), and readability assessment (Section 4.3). In the 93 The Turkish government fell after mob-tie allegations. Turkey’s constitution mandates a secular republic despite its Muslim majority. Military and secular leaders pressured President Demirel to keep the Islamic-oriented Virtue Party on the fringe. Business leaders feared Virtue would alienate the EU. Table 1: Excerpt of a manual summary M from DUC2003 experiments sections, we discuss the impact of genre and stylistic properties of documents on the local coherence computation. We also show that, though we do not need a computationally expensive learning phase, our model achieves state-ofthe-art performance. From this we conclude that a graph is an alternative to the entity grid model: it is computationally more tractable for modeling local coherence and does not suffer from data sparsity problems (Section 5). 2 The Entity Grid Model Barzilay and Lapata (2005; 2008) introduced the entity grid, a method for local coherence modeling that captures the distribution of discourse entities across sentences in a text. An entity grid is a two dimensional array, where rows correspond to sentences and columns to discourse entities. For each discourse entity ej and each sentence si in the text, the corresponding grid cell cij contains information about the presence or absence of the entity in the sentence. If the entity does not appear in the sentence, the corresponding grid cell contains an absence marker “−”. If the entity is present in the sentence, the cell contains a representation of the entity’s syntactic role: “S” if the entity is a subject, “O” if it is an object and “X” for all other syntactic roles (cf. Table 2). When a noun is attested more than once with a different grammatical role in the same sentence, the role with the highest grammatical ranking is chosen to represent the entity (a subject is ranked higher than an object, which is ranked higher than other syntactic roles). Barzilay and Lapata (2008) capture local coherence by means of local entity transitions, i.e. sequences of grid cells (c1j . . . cij . . . cnj) representing the syntactic function or absence of an entity in adjacent sentences1. The coherence of a sentence in relation to its local context is determined by the 1For complexity reasons, Barzilay and Lapata consider only transitions between at most three sentences. GOVERNMENT ALLEGATION TURKEY CONSTITUTION SECULAR REPUBLIC MAJORITY MILITARY LEADER PRESIDENT DEMIREL VIRTUE PARTY FRINGE BUSINESS EU s1 S X −−−−−−−−−−−−−− s2 −−X S X O X −−−−−−−−− s3 −−−−X −−X S X S X O X −− s4 −−−−−−−−S −−S −−X O Table 2: Entity Grid representation of summary M local entity transitions of the entities present or absent in the sentence. To make this representation accessible to machine learning algorithms, Barzilay and Lapata (2008) compute for each document the probability of each transition and generate feature vectors representing the sentences. 
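A minimal sketch of these transition features is given below, assuming a grid encoded as lists of role symbols; the toy grid and the restriction to length-2 transitions are simplifications for illustration, not the exact setting of Barzilay and Lapata (2008).

```python
# Count entity transitions between adjacent sentences in a grid of role
# symbols and normalise them into a transition-probability vector.

from collections import Counter
from itertools import product

ROLES = ["S", "O", "X", "-"]

def transition_probabilities(grid, length=2):
    """grid: list of rows (sentences), each a list of role symbols per entity."""
    counts = Counter()
    n_sent, n_ent = len(grid), len(grid[0])
    for j in range(n_ent):
        for i in range(n_sent - length + 1):
            counts[tuple(grid[i + k][j] for k in range(length))] += 1
    total = sum(counts.values())
    return {t: counts[t] / total for t in product(ROLES, repeat=length)}

grid = [
    ["S", "X", "-", "-"],     # toy grid, not the grid of Table 2
    ["-", "-", "X", "S"],
    ["-", "-", "S", "-"],
]
probs = transition_probabilities(grid)
print(probs[("S", "-")], probs[("-", "S")])   # 0.25 0.125
```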
Coherence assessment is then formulated as a ranking learning problem where the ranking function is learned with SVMlight (Joachims, 2002). The entity grid approach has already been applied to many applications relying on local coherence estimation: summary rating (Barzilay and Lapata, 2005), essay scoring (Burstein et al., 2010) or story generation (McIntyre and Lapata, 2010). It was also used successfully in combination with other systems or features. Soricut and Marcu (2006) show that the entity grid model is a critical component in their sentence ordering model for discourse generation. Barzilay and Lapata (2008) combine the entity grid with readability-related features to discriminate documents between easy- and difficult-to-read categories. Lin et al. (2011) use discourse relations to transform the entity grid representation into a discourse role matrix that is used to generate feature vectors for machine learning algorithms similarly to Barzilay and Lapata (2008). Several studies propose to extend the entity grid model using different strategies for entity selection. Filippova and Strube (2007) aim to improve the entity grid model performance by grouping entities by means of semantic relatedness. In their studies, Elsner and Charniak extend the number and type of entities selected and consider that each entity has to be dealt with accordingly with its information status (Elsner et al., 2007) or its namedentity category (Elsner and Charniak, 2011). Finally, they include a heuristic coreference resolution component by linking mentions which share a 94 s1 s2 s3 s4 e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 e11 e12 e13 e14 e15 e16 3 1 1 3 1 2 1 1 1 3 1 3 1 2 1 3 3 1 2 s1 s2 s3 s4 1 1 s1 s2 s3 s4 1 2 (a) Bipartite Graph (b) Unweighted One-mode (c) Weighted One-mode Projection Projection e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 e11 e12 e13 e14 e15 e16 s1 3 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 s2 0 0 1 3 1 2 1 0 0 0 0 0 0 0 0 0 s3 0 0 0 0 1 0 0 1 3 1 3 1 2 1 0 0 s4 0 0 0 0 0 0 0 0 3 0 0 3 0 0 1 2 s1 s2 s3 s4 s1 0 0 0 0 s2 0 0 1 0 s3 0 0 0 1 s4 0 0 0 0 s1 s2 s3 s4 s1 0 0 0 0 s2 0 0 1 0 s3 0 0 0 2 s4 0 0 0 0 (d) Incidence Matrix (e) Unweighted Adjacency (f) Weighted Adjacency Matrix Matrix Figure 1: Bipartite graph for summary M from Table 1, one-mode projections and associated incidence and adjacency matrices. Weights in Figure 1(a) are assigned as follows: “S” = 3, “O” = 2, “X” = 1, “−” = 0 (no edge). head noun. These extensions led to the best results reported so far for the sentence ordering task. 3 Method Our model is based on the insight that the entity grid (Barzilay and Lapata, 2008) corresponds to the incidence matrix of a bipartite graph representing the text (see Newman (2010) for more details on graph representation). A fundamental assumption underlying our model is that this bipartite graph contains the entity transition information needed for local coherence computation, rendering feature vectors and learning phase unnecessary. The bipartite graph G = (Vs, Ve, L, w) is defined by two independent sets of nodes – that correspond to the set of sentences Vs and the set of entities Ve of the text – and a set of edges L associated with weights w. An edge between a sentence node si and an entity node ej is created in the bipartite graph if the corresponding cell cij in the entity grid is not equal to “−”. Each edge is associated with a weight w(ej, si) that depends on the grammatical role of the entity ej in the sentence si2. 
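The construction of the bipartite graph can be sketched directly from the grid; the dictionary-based representation below is an illustrative simplification (no graph library, entities indexed by column) using the weight scheme of Figure 1(a).

```python
# Build the weighted bipartite graph from an entity grid: one edge per
# non-empty cell, weighted by the entity's grammatical role (S=3, O=2, X=1).

ROLE_WEIGHT = {"S": 3, "O": 2, "X": 1}

def bipartite_graph(grid):
    """grid: list of sentence rows, each a list of role symbols per entity.

    Returns edges as a dict {(sentence_index, entity_index): weight}.
    """
    edges = {}
    for i, row in enumerate(grid):
        for j, role in enumerate(row):
            if role != "-":
                edges[(i, j)] = ROLE_WEIGHT[role]
    return edges

grid = [
    ["S", "X", "-", "-"],
    ["-", "-", "X", "S"],
    ["-", "-", "S", "-"],
]
print(bipartite_graph(grid))
# {(0, 0): 3, (0, 1): 1, (1, 2): 1, (1, 3): 3, (2, 2): 3}
```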
In contrast to Barzilay and Lapata’s entity grid that contains information about absent entities, our graph-based representation only contains “positive” information. Figure 1(a) shows an example of the bipartite graph that corresponds to the grid in Table 2. The incidence matrix of this graph (Figure 1(d)) is very similar to the entity grid. 2The assignment of weights is described in Section 4. By modeling entity transitions, Barzilay and Lapata rely on links that exist between sentences to model local coherence. In the same spirit, we apply different kinds of one-mode projections to the sentence node set Vs of the bipartite graph to represent the connections that exist between – potentially non adjacent – sentences in the graph. These projections result in graphs where nodes correspond to sentences. An edge is created between two nodes if the corresponding sentences have a least one entity in common. Contrary to the bipartite graph, one-mode projections are directed as they follow the text order. Therefore, in projection graphs an edge can exist between the first and the second sentence while the inverse is not possible. In our model, we define three kinds of projection graphs, PU, PW and PAcc, depending on the weighting scheme associated with their edges. In PU, weights are binary and equal 1 when two sentences have a least one entity in common (Figure 1(b)). In PW , edges are weighted according to the number of entities “shared” by two sentences (Figure 1(c)). In PAcc syntactic information is accounted for by integrating the edge weights in the bipartite graph. In this case, weights are equal to Wik = X e∈Eik w(e, si) · w(e, sk) , where Eik is the set of entities shared by si and sk. Distance between sentences si and sk can also be integrated in the weight of one-mode projections to decrease the importance of links that ex95 ists between non adjacent sentences. In this case, the weights of the projection graphs are divided by k −i. From this graph-based representation, the local coherence of a text T can be measured by computing the average outdegree of a projection graph P. This centrality measure was chosen for two main reasons. First, it allows us to evaluate to which extent a sentence is connected, in terms of discourse entities, with the other sentences of the text. Second, compared to other centrality measures, the computational complexity of the average outdegree is low (O( N∗(N−1) 2 ) for a document composed by N sentences), keeping the local coherence estimation feasible on large documents and on large corpora. Formally, the local coherence of a text T is equal to LocalCoherence(T) = AvgOutDegree(P) = 1 N X i=1..N OutDegree(si) , where OutDegree(si) is the sum of the weights associated to edges that leave si and N is the number of sentences in the text. This value can also be seen as the sum of the values of the adjacency matrix of the projection graph (Figures 1(e) and 1(f)) divided by the number of sentences. 4 Experiments We compare our model with the entity grid approach and evaluate the influence of the different weighting schemes used in the projection graphs, either PW or PAcc, where weights are potentially decreased by distance information Dist. Our baseline corresponds to local coherence computation based on the unweighted projection graph PU. For graph construction, all nouns in a document are considered as discourse entities, even those which do not head NPs as this is beneficial for the entity grid model as described in Elsner and Charniak (2011). 
We also propose to use a coreference resolution system and consider coreferent entities to be the same discourse entity. To do so, we use one of the top performing systems from the CoNLL 2012 shared task (Martschat et al., 2012). As the coreference resolution system is trained on well-formed textual documents and expects a correct sentence ordering, we use in all our experiments only features that do not rely on sentence order (e.g. alias relations, string matching, etc.). Grammatical information associated with each entity is extracted automatically thanks to the Stanford parser using dependency conversion (de Marneffe et al., 2006). Syntactic weights in the bipartite graph are defined following the linguistic intuition that subjects are more important than objects, which are themselves more important than other syntactic roles. Preliminary experiments show that as long as weight assignment follows the scheme S > O > X, then more coherent documents are associated with a higher local coherence value than less coherent document in 90% of cases (while this value equals 49% when no restriction is given on syntactic weights order). Moreover, as the local coherence computation is a linear combination of the syntactic weights, the function is smooth and no large variations of the local coherence values are observed for small changes of weights’ values. For these reasons, weights w(e, si) are set as follows: 3 if e is subject in si, 2 if e is an object and 1 otherwise. We evaluate the ability of our graph-based model to estimate the local coherence of a textual document with three different experiments. First, we perfom a sentence ordering task (Section 4.1) as proposed in Barzilay and Lapata (2008). Then, as the first task uses “artificial” documents, we also work on two other tasks that involve “real” documents: summary coherence rating (Section 4.2), and readability assessment (Section 4.3). In these experiments, distance computation and syntactic weights are the same for all tasks and all corpora. However, the model is also flexible and can be adaptated to the different tasks by optimizing the parameters on a development data set, which may give better results. 4.1 Sentence Ordering The first experiment consists in ranking alternative sentence orderings of a document, as proposed by Barzilay and Lapata (2008) and Elsner and Charniak (2011). 4.1.1 Experimental Settings The sentence ordering task can be performed in two ways: discrimination and insertion. Discrimination consists in comparing a document to a random permutation of its sentences. For this, our system associates local coherence values with the original document and its permutation, the output of our system being considered as correct if the score for the original document is higher than the 96 score of its permutation. In the insertion task, proposed by Elsner and Charniak (2011), we evaluate the ability of our system to retrieve the original position of a sentence previously removed from a document. For this, each sentence is removed in turn and a local coherence score is computed for every possible reinsertion position. The system output is considered as correct if the document associated with the highest local coherence score is the one in which the sentence is reinserted in the correct position. These two tasks were performed on documents extracted from the English test part of the CoNLL 2012 shared task (Pradhan et al., 2012). 
This corpus, composed by documents of multiple news sources – spoken or written – was preferred to the ACCIDENTS and EARTHQUAKES corpora used by Barzilay and Lapata (2008) for two reasons. First, as mentioned by Elsner and Charniak (2008), these corpora use a very constrained style and are not typical of normal informative documents3. Second, we want to evaluate the influence of automatically performed coreference resolution in a controlled fashion. The coreference resolution system used performs well on the CoNLL 2012 data. In this dataset, documents composed by the concatenation of differents news articles or too short to have at least 20 permutations were discarded from the corpus. This filtering results in 61 documents composed of 36.1 sentences or 2064 word tokens on average. In both discrimination and insertion, we compare our system against a random baseline where random values are associated with the different orderings. 4.1.2 Discrimination Accuracy is used to evaluate the ability of our system to discriminate a document from 20 different permutations. It equals the number of times our system gives the highest score to the original document, divided by the number of comparisons. Since the model can give the same score for a permutation and the original document, we also compute F-measure where recall is correct/total and precision equals correct/decisions. We test significance using the Student’s t-test that can detect significant differences between paired samples. Moreover, as increasing the number of hypotheses 3Our graph-based model obtains for the discrimination task an accuracy of 0.846 and 0.635 on the ACCIDENTS and EARTHQUAKES datasets, respectively, compared to 0.904 and 0.872 as reported by Barzilay and Lapata (2008). Acc F Acc F Random 0.496 0.496 B&L 0.877 0.877 E&C 0.915 0.915 wo coref w coref PU, Dist 0.830 0.830 0.833 0.833 PW , Dist 0.871 0.871 0.849 0.849 PAcc, Dist 0.889 0.889 0.852 0.852 Table 3: Discrimination, reproduced baselines (B&L: Barzilay and Lapata (2008); E&C Elsner and Charniak (2011)) vs. graph-based in a test can also increase the likelihood of witnessing a rare event, and therefore, the chance to reject the null hypothesis when it is true, we use the Bonferroni correction to adjust the increased random likelihood of apparent significance. Table 3 presents the values obtained by three baseline systems when applied to our corpus. Results for the entity grid models described by Barzilay and Lapata (2008) and Elsner and Charniak (2011) are obtained by using Micha Elsner’s reimplementation in the Brown Coherence Toolkit4. The system was trained on the English training part of the CoNLL 2012 shared task filtered in the same way as the test part. Table 3 also displays the results for our model. These values show that our system performs comparable to the state-of-the-art. Indeed, the difference between our best results and those of Elsner and Charniak are not statistically significant. In this experiment, distance information is critical. Without it, it is not possible to distinguish between an original document and one of its permutation as both contain the same number and kind of entities. Distance however can detect changes in the distribution of entities within the document as space between entities is significantly modified when sentence order is permuted. When the number of entities “shared” by two sentences is taken into account (PW ), the accuracy of our system grows (from 0.830 to 0.871). 
Table 3 finally shows that syntactic information improves the performance of our system (yet not significantly) and gives the best results (PAcc). We also evaluated the influence of coreference resolution on the performance of our system. Us4https://bitbucket.org/melsner/ browncoherence; B&L is Elsner’s “baseline entity grid” (command line option ’-n’), E&C is Elsner’s “extended entity grid” (’-f’) 97 Acc. Ins. Acc. Ins. Random 0.028 0.071 E&C 0.068 0.167 wo coref w coref PU, Dist 0.062 0.101 0.068 0.120 PW , Dist 0.075 0.114 0.070 0.138 PAcc, Dist 0.071 0.102 0.067 0.097 Table 4: Insertion, reproduced baselines vs. graphbased ing coreference resolution improves the performance of the system when distance information is used alone in the system (Table 3). However, this improvement is not statistically significant. 4.1.3 Insertion Sentence insertion is much more difficult than discrimination for two reasons. First, in insertion, permutations only differ by one sentence. Second, a document is compared to many more permutations in insertion task than in discrimination. In complement to accuracy, we use the insertion score introduced by Elsner and Charniak (2011) for evaluation. This score – the higher, the better – computes the proximity between the initial and the proposed position of a sentence, averaged by the number of sentences. Table 4 shows that, as expected, results for this task are much lower than those obtained for discrimination. However they are still comparable with the results of Elsner and Charniak (2011)5. As previously and for the same reasons, distance information is critical for this task. The best results, that present a statistically significant improvement when compared to the random baseline, are obtained when distance information and the number of entities “shared” by two sentences are taken into account (PW ). We can see that the accuracy value obtained with our system is higher than the one provided with the entity grid model. However, the entity grid model reaches a significantly higher insertion score. This means that, if it makes more mistakes than our system, the position chosen by the entity grid model is usually closer to the correct position. Finally, contrary to the discrimination task, syntactic information (PAcc) does not improve the performance of our system. 5Their results are slightly lower than those presented in their paper, probably because our corpus is composed by documents that can be longer than the ones used in their experiments (Wall Street Journal articles). When the coreference resolution system is used, the best accuracy value decreases while the insertion score increases from 0.114 to 0.138 (Table 4). Therefore, coreference resolution tends to associate positions that are closer to the original ones. 4.2 Summary Coherence Rating To reconfirm the hypothesis that our model can estimate the local coherence of a textual document, we perform a second experiment, summary coherence rating. To this end, we apply our model on the corpus used and proposed by Barzilay and Lapata (2008). As the objective of our model is to estimate the coherence of a summary, we prefer this dataset to other summarization evaluation task corpora, as these account for other dimensions of the summaries: content selection, fluency, etc. Starting with a pair of summaries, one slightly more coherent than the other, the objective of the task is to order the two summaries according to local coherence. 
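A minimal sketch of how this pairwise ranking could be carried out with the graph-based score of Section 3 is given below; the grid encoding follows the earlier sketches, and rank_pair, the mode labels and the tie-breaking choice are illustrative assumptions rather than the authors' implementation.

```python
# Project the bipartite graph onto sentences (P_U, P_W or P_Acc), compute the
# average out-degree as the local coherence score, and prefer the summary in
# a pair that receives the higher score.

ROLE_WEIGHT = {"S": 3, "O": 2, "X": 1}

def local_coherence(grid, mode="acc", use_distance=False):
    n = len(grid)
    total = 0.0
    for i in range(n):
        for k in range(i + 1, n):              # edges follow text order
            shared = [j for j in range(len(grid[0]))
                      if grid[i][j] != "-" and grid[k][j] != "-"]
            if not shared:
                continue
            if mode == "u":                    # P_U: unweighted projection
                w = 1.0
            elif mode == "w":                  # P_W: number of shared entities
                w = float(len(shared))
            else:                              # P_Acc: syntax-weighted
                w = sum(ROLE_WEIGHT[grid[i][j]] * ROLE_WEIGHT[grid[k][j]]
                        for j in shared)
            if use_distance:                   # discount non-adjacent links
                w /= (k - i)
            total += w
    return total / n                           # average out-degree

def rank_pair(grid_a, grid_b, **kwargs):
    """Return 'A' if summary A is scored as more coherent, else 'B'."""
    score_a = local_coherence(grid_a, **kwargs)
    score_b = local_coherence(grid_b, **kwargs)
    return "A" if score_a >= score_b else "B"
```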
4.2.1 Experimental Settings For the summary coherence rating experiment, pairs to be ordered are composed of summaries extracted from the Document Understanding Conference (DUC 2003). Summaries, provided either by humans or by automatic systems, were judged by seven humans annotators and associated with a coherence score (for more details on this score see Barzilay and Lapata (2008)). 80 pairs were then created, each of these being composed by two summaries of a same document where the score of one of the summaries is significantly higher than the score of the second one. Even though all summaries are of approximately the same length (114.2 words on average), their sentence length can vary considerably. Indeed, more coherent summaries tend to have more sentences and contain less entities. For evaluation purposes, the accuracy still corresponds to the number of correct ratings divided by the number of comparisons, while the Fmeasure combines recall and precision measures. As before, significance is tested with the Student’s t-test accounting for the Bonferroni correction. 4.2.2 Results Table 5 compares the results reported by Barzilay and Lapata (2008) on the exact same corpus with the results obtained with our system. It shows that 98 Acc. F Acc. F B&L 0.833 wo coref w coref PU 0.800 0.815 0.700 0.718 PW 0.613 0.613 0.538 0.548 PAcc 0.700 0.704 0.638 0.638 PU, Dist 0.650 0.658 0.550 0.557 PW , Dist 0.525 0.525 0.513 0.513 PAcc, Dist 0.700 0.700 0.588 0.588 Table 5: Summary Coherence Rating, reported results from Barzilay and Lapata (2008) vs. graphbased our system gives results comparable to those obtained by Barzilay and Lapata (2008). This table also shows that, contrary to sentence ordering task, accounting for the distance between two sentences (Dist) tends to decrease the results. This difference is explained by the fact that a manual summary, usually considered as more coherent by humans annotators, tends to contain more (and shorter) sentences than an automatic one. As adding distance information decreases the value of our local coherence score, our graph-based model gives better results without it. Moreover, in contrast to the first experiment, when accounting for the number of entities “shared” by two sentences (PW ), values of accuracy and F-measure are lower. We explain this behaviour by the number of sentences contained in the less coherent documents. Indeed, they are composed by a smaller number of sentences but contain more entities on average. This means that, in these documents, two sentences tend to share a larger number of entities and therefore have a higher local coherence score when the PW projection graph is used. When combined with distance information, syntactic information still improves the results (PAcc), though not significantly, but does not lead to the best results for this task. Finally, Table 5 also shows that using a coreference resolution system for document representation does not improve the performance of our system. We believe that, as mentioned by Barzilay and Lapata (2008), this degradation is related to the fact that automatic summarization systems do not use anaphoric expressions which makes the coreference resolution system useless in this case. With our graph-based model, the best results are obtained by the baseline (PU), and experiments show that adding information about distance or syntax does not help in this context. It seems therefore necessary to integrate information that is more appropriate to summaries. 
Although making the model more appropriate for a specific task is out of the scope of this paper, our model is flexible and accounting for information about genre differences or sentence length, by adding weights in the graph-based representation of the document, is feasible without any modification of the model. 4.3 Readability Assessment Barzilay and Lapata (2008) argue that grid models are domain and style dependent. Therefore they proposed a readability assessment task to test if the entity grid model can be used for style classification. They combined their model with Schwarm and Ostendorf’s (2005) readability features and use Support Vector Machines to classify documents in two categories. With the same intention, we evaluate the ability of our model to differentiate “easy to read” documents from difficult ones. 4.3.1 Experimental Settings The objective of the readability assessment task is to evaluate how difficult to read a document is. We perform this task on the data used by Barzilay and Lapata (2008), a corpus collected originally by Barzilay and Elhadad (2003) from the Encyclopedia Britannica and its version for children, the Britannica Elementary. Both versions contain 107 articles. In Encyclopedia Britannica, documents are composed by an average of 83.1 sentences while they contain 36.6 sentences in Britannica Elementary. Although these texts are not explicitly annotated with grade levels, they represent two broad readability categories. In order to estimate the complexity of a document, our model computes the local coherence score for each article in the two categories. The article associated with the higher score is considered to be the more readable as it is more coherent, needing less interpretation from the reader than a document associated with a lower local coherence score. Values presented in the following section correspond to accuracy, where the system is correct if it assigns the higher local coherence score to the most “easy to read” document, and F-measure. 99 Acc. F Acc. F S&O 0.786 B&L 0.509 B&L + S&O 0.888 wo coref w coref PU 0.589 0.589 0.374 0.374 PW 0.579 0.579 0.383 0.383 PAcc 0.645 0.645 0.421 0.421 PU, Dist 0.589 0.589 0.280 0.280 PW , Dist 0.570 0.570 0.290 0.290 PAcc, Dist 0.766 0.766 0.308 0.308 Table 6: Readability, reported results from Barzilay and Lapata (2008) vs. graph-based (S&O: Schwarm and Ostendorf (2005)) 4.3.2 Results In order to compare our results with those reported by Barzilay and Lapata (2008), entities used for the graph-based representation are discourse entities that head NPs. Table 6 shows that, for this task, syntactic information plays a dominant role (PAcc). A statistically significant improvement is provided by including syntactic information. It gives more weight to subject entities that are more numerous in the Britannica Elementary documents which are composed by simpler and shorter sentences. Finally, when distance is accounted for together with syntactic information, the accuracy is significantly improved (p < 0.01) with regard to the results obtained with syntactic information only. Table 6 also shows that when the number of entities “shared” by two sentences is accounted for (PW ), the results are lower. Indeed, Encyclopedia Britannica documents are composed by longer sentences, that contain a higher number of entities. This increases the local coherence value of difficult documents more than the value of “easy to read” documents, that contain less entities. 
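The weighting schemes compared above can be sketched as follows. The 3/2/1 weights for subject, object and other grammatical roles are an assumption made here for illustration, in the spirit of entity-grid salience, and the helper names are hypothetical.

```python
# Illustrative sketch of syntactic (PAcc-style) edge weighting: entity
# occurrences contribute according to their grammatical role, so sentences
# linked through subjects weigh more than sentences linked through other
# roles; the optional distance division corresponds to the Dist variant.
ROLE_WEIGHT = {"subject": 3, "object": 2, "other": 1}   # assumed values

def edge_weight(sent_i, sent_j, distance=None):
    """sent_*: dict mapping entity -> grammatical role in that sentence."""
    w = 0.0
    for entity, role_i in sent_i.items():
        if entity in sent_j:
            w += ROLE_WEIGHT.get(role_i, 1) * ROLE_WEIGHT.get(sent_j[entity], 1)
    if distance:
        w /= float(distance)
    return w

# Documents whose sentences are linked through nearby subject mentions, as in
# Britannica Elementary, then receive higher average edge weights.
doc = [{"cat": "subject", "mat": "other"},
       {"cat": "subject", "milk": "object"}]
print(edge_weight(doc[0], doc[1], distance=1))  # -> 9.0
```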
When our graph-based representation used the coreference resolution system, unlike the observation of Barzilay and Lapata (2008), the results of our model decrease significantly. The poor performance of our system in this case can be explained by the fact that the coreference resolution system regroups more entities in Encyclopedia Britannica documents than in Britannica Elementary ones. Therefore, the number of entities that are “shared” by two sentences increases more importantly in the Encyclopedia Britannica corpus, while the distance between two occurrences of one entity decreases in a more significant manner. For these reasons, the coherence scores associated with “difficult to read” documents tend to be higher when coreference resolution is performed on our data, which reduces the performance of our system. As before, syntactic information leads to the best results, but does not allow the accuracy to be higher than random anymore. Compared to the results provided by Barzilay and Lapata (2008) with the entity grid model alone, our representation outperforms their model significantly. We believe that this difference is caused by how syntactic information is introduced in the document representation and by the fact that our system can deal with entities that appear throughout the whole document while the entity grid model only looks at entities within a three sentences windows. Our model which captures exclusively local coherence is almost on par with the results reported for Schwarm & Ostendorf’s (2005) system which relies on a wide range of lexical, syntactic and semantic features. Only when Barzilay and Lapata (2008) combine the entity grid with Schwarm & Ostendorf’s features they reach performance considerably better than ours. In addition to the experiments proposed by Barzilay and Lapata (2008), we used a third readability category, the Britannica Student, that contains articles targeted for youths (from 11 to 14 years old). These documents, which are quite similar to the Encyclopedia Britannica ones, are composed by an average of 44.1 sentences. As we were only able to find 99 articles out of the 107 original ones in this category, sub corpora of the three categories were used for the comparison with the Britannica Student articles. Table 7 shows the results obtained for the comparisons between the two first categories and the Britannica Student articles. As previously, coreference resolution tends to lower the results, therefore only values obtained without coreference resolution are reported in the table. When articles from Britannica Student are compared to articles extracted from Encyclopedia Britannica, Table 7 shows that the different parameters have the same influence as for comparing between Encyclopedia Britannica and Britannica Elementary: statistically significant improvement with syntactic information, higher values when distance is taken into account, etc. However, it 100 Brit. vs. Stud. Stud. vs. Elem. Acc. F Acc. F PU 0.444 0.444 0.667 0.667 PW 0.434 0.434 0.636 0.636 PAcc 0.465 0.465 0.707 0.707 PU, Dist 0.475 0.475 0.646 0.646 PW , Dist 0.485 0.485 0.616 0.616 PAcc, Dist 0.556 0.556 0.657 0.657 Table 7: Readability, comparison between Encyclopedia Britannica, Britannica Elementary and Britannica Student can also be seen that accuracy and F-measure are lower for comparing these two corpora. 
This is probably due to the stylistic difference between these two kinds of articles, which is less significant than the difference between articles from Encyclopedia Britannica and Britannica Elementary. Concerning the comparison between Britannica Student and Britannica Elementary articles, Table 7 shows that integrating distance information gives slightly different results and tends to decrease the values of accuracy and F-measure. This is explained by the fact that Britannica Elementary documents contain fewer entities than Britannica Student articles. As the length of the two kinds of articles is similar, distance between entities in Britannica Elementary documents is more important. As a result, accounting for distance information lowers the local coherence values for the more coherent document, which reduces the performance of our model. As previously, syntactic information improves the results and, for this comparison, the best result is obtained when syntactic information alone is accounted for. This leads to an accuracy which is almost equal to the one when comparing Encyclopedia Britannica and Britannica Elementary (0.707 against 0.766). These two additional experiments show that our model is style dependent. It obtains better results when it has to distinguish between Encyclopedia Britannica and Britannica Elementary or Britannica Student and Britannica Elementary articles which present a more important difference from a stylictic point of view than articles from Encyclopedia Britannica and Britannica Elementary. 5 Conclusions In this paper, we proposed an unsupervised and computationally efficient graph-based local coherence model. Experiments show that our model is robust among tasks and domains, and reaches reasonable results for three tasks with the same parameter values and settings (i.e. accuracy values of 0.889, 0.70 and 0.766 for sentence ordering, summary coherence rating and readability assessment tasks respectively (PAcc, Dist)). Moreover, our model can be optimized and obtains results comparable with entity grid based methods when proper settings are used for each task. Our model has two main advantages over the entity grid model. First, as the graph used for document representation contains information about entity transitions, our model does not need a learning phase. Second, as it relies only on graph centrality, our model does not suffer from the computational complexity and data sparsity problems mentioned by Barzilay and Lapata (2008). Our current model leaves space for improvement. Future work should first investigate the integration of information about entities. Indeed, our model only uses entities as indications of sentence connection although it has been shown that distinguishing important from unimportant entities, according to their named-entity category, has a positive impact on local coherence computation (Elsner and Charniak, 2011). Moreover, future work should also examine the use of discourse relation information, as proposed in (Lin et al., 2011). This can be easily done by adding edges in the projection graphs when sentences contain entities related from a discourse point of view while Lin et al.’s approach suffers from complexity and data sparsity problems similar to the entity grid model. Finally, these promising results on local coherence modeling make us believe that our graphbased representation can be used without much modification for other tasks, e.g. extractive summarization or topic segmentation. 
This could be achieved with link analysis algorithms such as PageRank, that decide on the importance of a (sentence) node within a graph based on global information recursively drawn from the entire graph. Acknowledgments. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS postdoctoral scholarship. We would like to thank Mirella Lapata and Regina Barzilay for making their data available and Micha Elsner for providing his toolkit. 101 References Regina Barzilay and Noemie Elhadad. 2003. Sentence alignment for monolingual comparable corpora. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Sapporo, Japan, 11–12 July 2003, pages 25–32. Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, Mich., 25–30 June 2005, pages 141–148. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34. Susan E. Brennan, Marilyn W. Friedman, and Carl J. Pollard. 1987. A centering approach to pronouns. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, Stanford, Cal., 6–9 July 1987, pages 155–162. Jill Burstein, Joel Tetreault, and Slava Andreyev. 2010. Using entity-based features to model coherence in student essays. In Proceedings of Human Language Technologies 2010: The Conference of the North American Chapter of the Association for Computational Linguistics, Los Angeles, Cal., 2–4 June 2010, pages 681–684. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation, Genoa, Italy, 22–28 May 2006, pages 449–454. Micha Elsner and Eugene Charniak. 2008. Coreference-inspired coherence modeling. In Proceedings ACL-HLT 2008 Conference Short Papers, Columbus, Ohio, 15–20 June 2008, pages 41–44. Micha Elsner and Eugene Charniak. 2011. Extending the entity grid with entity-specific features. In Proceedings of the ACL 2011 Conference Short Papers, Portland, Oreg., 19–24 June 2011, pages 125–129. Micha Elsner, Joseph Austerweil, and Eugene Charniak. 2007. A unified local and global model for discourse coherence. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, Rochester, N.Y., 22– 27 April 2007, pages 436–443. Read this version: http://www.cs.brown.edu/ melsner/order.pdf. Katja Filippova and Michael Strube. 2007. Extending the entity-grid coherence model to semantically related entities. In Proceedings of the 11th European Workshop on Natural Language Generation, Schloss Dagstuhl, Germany, 17–20 June 2007, pages 139–142. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the 8th International Conference on Knowledge Discovery and Data Mining, Edmonton, Alberta, Canada, 23– 26 July 2002, pages 133–142. Nikiforos Karamanis, Chris Mellish, Massimo Poesio, and Jon Oberlander. 2009. Evaluating centering for information ordering using corpora. 
Computational Linguistics, 35(1):29–46. Rodger Kibble and Richard Power. 2004. Optimizing referential coherence in text generation. Computational Linguistics, 30(4):401–416. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, Oreg., 19–24 June 2011, pages 997–1006. Sebastian Martschat, Jie Cai, Samuel Broscheit, ´Eva M´ujdricza-Maydt, and Michael Strube. 2012. A multigraph model for coreference resolution. In Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning, Jeju Island, Korea, 12–14 July 2012, pages 100–106. Neil McIntyre and Mirella Lapata. 2010. Plot induction and evolutionary search for story generation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden, 11–16 July 2010, pages 1562–1572. Rada Mihalcea and Dragomir Radev. 2011. Graphbased Natural Language Processing and Information Retrieval. Cambridge Univ. Press, Cambridge, U.K. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Roberto Navigli and Mirella Lapata. 2010. An experimental study of graph connectivity for unsupervised word sense disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4):678–692. Mark E.J. Newman. 2010. Networks: An Introduction. Oxford University Press, New York, N.Y. Massimo Poesio, Rosemary Stevenson, Barbara Di Eugenio, and Janet Hitzeman. 2004. Centering: A parametric theory and its instantiations. Computational Linguistics, 30(3). 309-363. Sameer Pradhan, Alessandro Moschitti, and Nianwen Xue. 2012. CoNLL-2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes. 102 In Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning, Jeju Island, Korea, 12–14 July 2012, pages 1– 40. Sarah E. Schwarm and Mari Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, Mich., 25–30 June 2005, pages 523–530. Radu Soricut and Daniel Marcu. 2006. Discourse generation using utility-trained coherence models. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, Australia, 17–21 July 2006, pages 1105– 1112. 103
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1014–1022, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Summarization Through Submodularity and Dispersion Anirban Dasgupta Yahoo! Labs Sunnyvale, CA 95054 [email protected] Ravi Kumar Google Mountain View, CA 94043 [email protected] Sujith Ravi Google Mountain View, CA 94043 [email protected] Abstract We propose a new optimization framework for summarization by generalizing the submodular framework of (Lin and Bilmes, 2011). In our framework the summarization desideratum is expressed as a sum of a submodular function and a nonsubmodular function, which we call dispersion; the latter uses inter-sentence dissimilarities in different ways in order to ensure non-redundancy of the summary. We consider three natural dispersion functions and show that a greedy algorithm can obtain an approximately optimal summary in all three cases. We conduct experiments on two corpora—DUC 2004 and user comments on news articles—and show that the performance of our algorithm outperforms those that rely only on submodularity. 1 Introduction Summarization is a classic text processing problem. Broadly speaking, given one or more documents, the goal is to obtain a concise piece of text that contains the most salient points in the given document(s). Thanks to the omnipresent information overload facing all of us, the importance of summarization is gaining; semiautomatically summarized content is increasingly becoming user-facing: many newspapers equip editors with automated tools to aid them in choosing a subset of user comments to show. Summarization has been studied for the past in various settings—a large single document, multiple documents on the same topic, and user-generated content. Each domain throws up its own set of idiosyncrasies and challenges for the summarization task. On one hand, in the multi-document case (say, different news reports on the same event), the text is often very long and detailed. The precision/recall requirements are higher in this domain and a semantic representation of the text might be needed to avoid redundancy. On the other hand, in the case of user-generated content (say, comments on a news article), even though the text is short, one is faced with a different set of problems: volume (popular articles generate more than 10,000 comments), noise (most comments are vacuous, linguistically deficient, and tangential to the article), and redundancy (similar views are expressed by multiple commenters). In both cases, there is a delicate balance between choosing the salient, relevant, popular, and diverse points (e.g., sentences) versus minimizing syntactic and semantic redundancy. While there have been many approaches to automatic summarization (see Section 2), our work is directly inspired by the recent elegant framework of (Lin and Bilmes, 2011). They employed the powerful theory of submodular functions for summarization: submodularity embodies the “diminishing returns” property and hence is a natural vocabulary to express the summarization desiderata. In this framework, each of the constraints (relevance, redundancy, etc.) is captured as a submodular function and the objective is to maximize their sum. A simple greedy algorithm is guaranteed to produce an approximately optimal summary. They used this framework to obtain the best results on the DUC 2004 corpus. 
Even though the submodularity framework is quite general, it has limitations in its expressivity. In particular, it cannot capture redundancy constraints that depend on pairwise dissimilarities between sentences. For example, a natural constraint on the summary is that the sum or the minimum of pairwise dissimilarities between sentences chosen in the summary should be maximized; this, unfortunately, is not a submodular function. We call functions that depend on inter-sentence pair1014 wise dissimilarities in the summary as dispersion functions. Our focus in this work is on significantly furthering the submodularity-based summarization framework to incorporate such dispersion functions. We propose a very general graph-based summarization framework that combines a submodular function with a non-submodular dispersion function. We consider three natural dispersion functions on the sentences in a summary: sum of all-pair sentence dissimilarities, the weight of the minimum spanning tree on the sentences, and the minimum of all-pair sentence dissimilarities. These three functions represent three different ways of using the sentence dissimilarities. We then show that a greedy algorithm can obtain approximately optimal summary in each of the three cases; the proof exploits some nice combinatorial properties satisfied by the three dispersion functions. We then conduct experiments on two corpora: the DUC 2004 corpus and a corpus of user comments on news articles. On DUC 2004, we obtain performance that matches (Lin and Bilmes, 2011), without any serious parameter tuning; note that their framework does not have the dispersion function. On the comment corpus, we outperform their method, demonstrating that value of dispersion functions. As part of our methodology, we also use a new structured representation for summaries. 2 Related Work Automatic summarization is a well-studied problem in the literature. Several methods have been proposed for single- and multi-document summarization (Carbonell and Goldstein, 1998; Conroy and O’Leary, 2001; Takamura and Okumura, 2009; Shen and Li, 2010). Related concepts have also been used in several other scenarios such as query-focused summarization in information retrieval (Daum´e and Marcu, 2006), microblog summarization (Sharifiet al., 2010), event summarization (Filatova, 2004), and others (Riedhammer et al., 2010; Qazvinian et al., 2010; Yatani et al., 2011). Graph-based methods have been used for summarization (Ganesan et al., 2010), but in a different context—using paths in graphs to produce very short abstractive summaries. For a detailed survey on existing automatic summarization techniques and other related topics, see (Kim et al., 2011; Nenkova and McKeown, 2012). 3 Framework In this section we present the summarization framework. We start by describing a generic objective function that can be widely applied to several summarization scenarios. This objective function is the sum of a monotone submodular coverage function and a non-submodular dispersion function. We then describe a simple greedy algorithm for optimizing this objective function with provable approximation guarantees for three natural dispersion functions. 3.1 Preliminaries Let C be a collection of texts. Depending on the summarization application, C can refer to the set of documents (e.g., newswire) related to a particular topic as in standard summarization; in other scenarios (e.g., user-generated content), it is a collection of comments associated with a news article or a blog post, etc. 
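Before the notation is made precise in the remainder of this subsection, the three dispersion measures introduced above can be sketched as follows, assuming a symmetric inter-sentence distance d (defined formally below); this is an illustrative sketch rather than the authors' implementation.

```python
import itertools

# Rough sketch of the three dispersion functions over a selected set S,
# given d[u][v], a symmetric inter-sentence distance.
def h_sum(S, d):                       # sum of all-pair dissimilarities
    return sum(d[u][v] for u, v in itertools.combinations(S, 2))

def h_min(S, d):                       # minimum all-pair dissimilarity
    pairs = list(itertools.combinations(S, 2))
    return min(d[u][v] for u, v in pairs) if pairs else 0.0

def h_tree(S, d):                      # cost of a minimum spanning tree on S
    nodes = list(S)
    if len(nodes) < 2:
        return 0.0
    in_tree, cost = {nodes[0]}, 0.0
    while len(in_tree) < len(nodes):   # Prim's algorithm
        u, v = min(((u, v) for u in in_tree
                    for v in nodes if v not in in_tree),
                   key=lambda e: d[e[0]][e[1]])
        in_tree.add(v)
        cost += d[u][v]
    return cost
```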
For each document c ∈C, let S(c) denote the set of sentences in c. Let U = ∪c∈CS(c) be the universe of all sentences; without loss of generality, we assume each sentence is unique to a document. For a sentence u ∈U, let C(u) be the document corresponding to u. Each u ∈U is associated with a weight w(u), which might indicate, for instance, how similar u is to the main article (and/or the query, in querydependent settings). Each pair u, v ∈U is associated with a similarity s(u, v) ∈[0, 1]. This similarity can then be used to define an intersentence distance d(·, ·) as follows: let d′(u, v) = 1 −s(u, v) and define d(u, v) to be the shortest path distance from u to v in the graph where the weight of each edge (u, v) is d′(u, v). Note that d(·, ·) is a metric unlike d′(·, ·), which may not be a metric. (In addition to being intuitive, d(·, ·) being a metric helps us obtain guarantees on the algorithm’s output.) For a set S, and a point u ̸∈S, define d(u, S) = minv∈S d(u, v). Let k > 0 be fixed. A summary of U is a subset S ⊆U, |S| = k. Our aim is to find a summary that maximizes f(S) = g(S) + δh(S), (1) where g(S) is the coverage function that is nonnegative, monotone, and submodular1, h(S) is a 1A function f : U →ℜis submodular if for every 1015 dispersion function, and δ ≥0 is a parameter that can be used to scale the range of h(·) to be comparable to that of g(·). For two sets S and T, let P be the set of unordered pairs {u, v} where u ∈S and v ∈T. Our focus is on the following dispersion functions: the sum measure hs(S, T) = P {u,v}∈P d(u, v), the spanning tree measure ht(S, T) given by the cost of the minimum spanning tree of the set S∪T, and the min measure hm(S, T) = min{u,v}∈P d(u, v). Note that these functions span from considering the entire set of distances in S to considering only the minimum distance in S; also it is easy to construct examples to show that none of these functions is submodular. Define h⋆(u, S) = h⋆({u}, S) and h⋆(S) = h⋆(S, S). Let O be the optimal solution of the function f. A summary ˜S is a γ-approximation if f( ˜S) ≥ γf(O). 3.2 Algorithm Maximizing (1) is NP-hard even if δ = 0 or if g(·) = 0 (Chandra and Halld´orsson, 2001). For the special case δ = 0, since g(·) is submodular, a classical greedy algorithm obtains a (1 −1/e)approximation (Nemhauser et al., 1978). But if δ > 0, since the dispersion function h(·) is not submodular, the combined objective f(·) is not submodular as well. Despite this, we show that a simple greedy algorithm achieves a provable approximation factor for (1). This is possible due to some nice structural properties of the dispersion functions we consider. Algorithm 2 Greedy algorithm, parametrized by the dispersion function h; here, U, k, g, δ are fixed. S0 ←∅; i ←0 for i = 0, . . . , k −1 do v ←arg maxu∈U\Si g(Si+u)+δh(Si+u) Si+1 ←Si ∪{v} end for 3.3 Analysis In this section we obtain a provable approximation for the greedy algorithm. First, we show that a greedy choice is well-behaved with respect to the dispersion function h·(·). Lemma 1. Let O be any set with |O| = k. If S is such that |S| = ℓ< k, then (i) P u∈O\S hs(u, S) ≥|O \ S| ℓhs(O) k(k−1); A, B ⊆U, we have f(A)+f(B) ≥f(A∪B)+f(A∩B). (ii) P u∈O\S d(u, S) ≥1 2ht(O) −ht(S); and (iii) there exists u ∈O \ S such that hm(u, S) ≥ hm(O)/2. Proof. The proof for (i) follows directly from Lemma 1 in (Borodin et al., 2012). To prove (ii) let T be the tree obtained by adding all points of O \ S directly to their respective closest points on the minimum spanning tree of S. 
T is a spanning tree, and hence a Steiner tree, for the points in set S ∪O. Hence, cost(T) = ht(S) + P u∈O\S d(u, S). Let smt(S) denote the cost of a minimum Steiner tree of S. Thus, cost(T) ≥ smt(O ∪S). Since a Steiner tree of O ∪S is also a Steiner tree of O, smt(O ∪S) ≥smt(O). Since this is a metric space, smt(O) ≥1 2ht(O) (see, for example, (Cieslik, 2001)). Thus, ht(S) + X u∈O\S d(u, S) ≥1 2ht(O) ⇒ X u∈O\S d(u, S) ≥1 2ht(O) −ht(S). To prove (iii), let O = {u1, . . . , uk}. By definition, for every i ̸= j, d(ui, uj) ≥hm(O). Consider the (open) ball Bi of radius hm(O)/2 around each element ui. By construction for each i, Bi ∩O = {ui} and for each pair i ̸= j, Bi ∩Bj = ∅. Since |S| < k, and there are k balls Bi, there exists k−ℓballs Bi such that S∩Bi = ∅, proving (iii). We next show that the tree created by the greedy algorithm for h = ht is not far from the optimum. Lemma 2. Let u1, . . . , uk be a sequence of points and let Si = {uj, j ≤i}. Then, ht(Sk) ≥ 1/log k P 2≤j≤k d(uj, Sj−1). Proof. The proof follows by noting that we get a spanning tree by connecting each ui to its closest point in Si−1. The cost of this spanning tree is P 2≤j≤k d(uj, Sj−1) and this tree is also the result of the greedy algorithm run in an online fashion on the input sequence {u1, . . . , uk}. Using the result of (Imase and Waxman, 1991), the competitive ratio of this algorithm is log k, and hence the proof. We now state and prove the main result about the quality of approximation of the greedy algorithm. 1016 Theorem 3. For k > 1, there is a polynomial-time algorithm that obtains a γ-approximation to f(S), where γ = 1/2 for h = hs, γ = 1/4 for h = hm, and γ = 1/3 log k for h = ht. Proof. For hs and ht, we run Algorithm 1 using a new dispersion function h′, which is a slightly modified version of h. In particular, for h = hs, we use h′(S) = 2hs(S). For h = ht, we abuse notation and define h′ to be a function over an ordered set S = {u1, . . . , uk} as follows: h′(S) = P j≤|S| d(uj, Sj−1), where Sj−1 = {u1, . . . , uj−1}. Let f′(S) = g(S) + δh′(S). Consider the ith iteration of the algorithm. By the submodularity of g(·), X u∈O\Si g(Si ∪{u}) −g(Si) (2) ≥ g(O ∪Si) −g(Si) ≥g(O) −g(Sk), where we use monotonicity of g(·) to infer g(O ∪ Si) ≥g(O) and g(Si) ≤g(Sk). For h = hs, the proof follows by Lemma 1(i) and by Theorem 1 in (Borodin et al., 2012). For ht, using the above argument of submodularity and monotonicity of g, and the result from Lemma 1(ii), we have X u∈O\Si g(Si ∪u) −g(Si) + δd(u, Si) ≥ g(O) −g(Si) + δ(ht(O)/2 −ht(Si)) ≥ (g(O) + δht(O)/2) −(g(Si) + δht(Si)) ≥ f(O)/2 −(g(Si) + δht(Si)). Also, ht(Si) ≤2 smt(Si) since this is a metric space. Using the monotonicity of the Steiner tree cost, smt(Si) ≤smt(Sk) ≤ht(Sk). Hence, ht(Si) ≤2ht(Sk). Thus, X u∈O\Si g(Si ∪u) −g(Si) + δd(u, Si) ≥ f(O)/2 −(g(Si) + δht(Si)) ≥ f(O)/2 −(g(Sk) + 2δht(Sk)) ≥ f(O)/2 −2f(Sk). (3) By the greedy choice of ui+1, f′(Si ∪ui+1) −f′(Si) = g(Si ∪ui+1) −g(Si) + δd(ui+1, Si) ≥ (f(O)/2 −2f(Sk))/|O \ Si| ≥ 1 k(f(O)/2 −2f(Sk)). Summing over all i ∈[1, k −1], f′(Sk) ≥(k−1)/k(f(O)/2 −2f(Sk)). (4) Using Lemma 2 we obtain f(Sk) = g(Sk) + δht(Sk) ≥f′(Sk) log k ≥ 1 −1/k log k (f(O)/2 −2f(Sk)). By simplifying, we obtain f(Sk) ≥f(O)/3 log k. Finally for hm, we run Algorithm 1 twice: once with g as given and h ≡0, and the second time with g ≡0 and h ≡hm. Let Sg and Sh be the solutions in the two cases. Let Og and Oh be the corresponding optimal solutions. By the submodularity and monotonicity of g(·), g(Sg) ≥(1 −1/e)g(Og) ≥g(Og)/2. 
Similarly, using Lemma 1(iii), hm(Sh) ≥ hm(Oh)/2 since in any iteration i < k we can choose an element ui+1 such that hm(ui+1, Si) ≥ hm(Oh)/2. Let S = arg maxX∈{Sg,Sh} f(X). Using an averaging argument, since g and hm are both nonnegative, f(X) ≥(f(Sg)+f(Sh))/2 ≥(g(Og)+δhm(Oh))/4. Since by definition g(Og) ≥g(O) and hm(Oh) ≥ hm(O), we have a 1/4-approximation. 3.4 A universal constant-factor approximation Using the above algorithm that we used for hm, it is possible to give a universal algorithm that gives a constant-factor approximation to each of the above objectives. By running the Algorithm 1 once for g ≡0 and next for h ≡0 and taking the best of the two solutions, we can argue that the resulting set gives a constant factor approximation to f. We do not use this algorithm in our experiments, as it is oblivious of the actual dispersion functions used. 4 Using the Framework Next, we describe how the framework described in Section 3 can be applied to our tasks of interest, i.e., summarizing documents or user-generated content (in our case, comments). First, we represent the elements of interest (i.e., sentences within comments) in a structured manner by using dependency trees. We then use this representation to 1017 generate a graph and instantiate our summarization objective function with specific components that capture the desiderata of a given summarization task. 4.1 Structured representation for sentences In order to instantiate the summarization graph (nodes and edges), we first need to model each sentence (in multi-document summarization) or comment (i.e., set of sentences) as nodes in the graph. Sentences have been typically modeled using standard ngrams (unigrams or bigrams) in previous summarization work. Instead, we model sentences using a structured representation, i.e., its syntax structure using dependency parse trees. We first use a dependency parser (de Marneffe et al., 2006) to parse each sentence and extract the set of dependency relations associated with the sentence. For example, the sentence “I adore tennis” is represented by the dependency relations (nsubj: adore, I) and (dobj: adore, tennis). Each sentence represents a single node u in the graph (unless otherwise specified) and is comprised of a set of dependency relations (or ngrams) present in the sentence. Furthermore, the edge weights s(u, v) represent pairwise similarity between sentences or comments (e.g., similarity between views expressed in different comments). The edge weights are then used to define the inter-sentence distance metric d(u, v) for the different dispersion functions. We identify similar views/opinions by computing semantic similarity rather than using standard similarity measures (such as cosine similarity based on exact lexical matches between different nodes in the graph). For each pair of nodes (u, v) in the graph, we compute the semantic similarity score (using WordNet) between every pair of dependency relation (rel: a, b) in u and v as: s(u, v) = X reli∈u,relj∈v reli=relj WN(ai, aj) × WN(bi, bj), where rel is a relation type (e.g., nsubj) and a, b are the two arguments present in the dependency relation (b does not exist for some relations). WN(wi, wj) is defined as the WordNet similarity score between words wi and wj.2 The edge weights are then normalized across all edges in the 2There exists various semantic relatedness measures based on WordNet (Patwardhan and Pedersen, 2006). 
In our experiments, for WN we pick one that is based on the path length between the two words in the WordNet graph. graph. This allows us to perform approximate matching of syntactic treelets obtained from the dependency parses using semantic (WordNet) similarity. For example, the sentences “I adore tennis” and “Everyone likes tennis” convey the same view and should be assigned a higher similarity score as opposed to “I hate tennis”. Using the syntactic structure along with semantic similarity helps us identify useful (valid) nuggets of information within comments (or documents), avoid redundancies, and identify similar views in a semantic space. 4.2 Components of the coverage function Our coverage function is a linear combination of the following. (i) Popularity. One of the requirements for a good summary (especially, for user-generated content) is that it should include (or rather not miss) the popular views or opinions expressed by several users across multiple documents or comments. We model this property in our objective function as follows. For each node u, we define w(u) as the number of documents |Curel ⊆C| from the collection such that at least one of the dependency relations rel ∈u appeared in a sentence within some document c ∈Curel. The popularity scores are then normalized across all nodes in the graph. We then add this component to our objective function as w(S) = P u∈S w(u). (ii) Cluster contribution. This term captures the fact that we do not intend to include multiple sentences from the same comment (or document). Define B to be the clustering induced by the sentence to comment relation, i.e., two sentences in the same comment belong to the same cluster. The corresponding contribution to the objective function is P B∈B |S ∩B|1/2. (iii) Content contribution. This term promotes the diversification of content. We look at the graph of sentences where the weight of each edge is s(u, v). This graph is then partitioned based on a local random walk based method to give us clusters D = {D1, . . . , Dn}. The corresponding contribution to the objective function is P D∈D |S ∩D|1/2. (iv) Cover contribution. We also measure the cover of the set S as follows: for each element s in U first define cover of an element u by a set S′ as cov(u, S′) = P v∈S′ s(u, v). Then, the 1018 cover value of the set S is defined as cov(S) = P u∈S min(cov(u, S), 0.25cov(u, U)).3 Thus, the final coverage function is: g(S) = w(S) + α P B∈B |S ∩B|1/2 + β P D∈D |S ∩ D|1/2 + λcov(S), where α, β, λ are non-negative constants. By using the monotone submodularity of each of the component functions, and the fact that addition preserves submodularity, the following is immediate. Fact 4. g(S) is a monotone, non-negative, submodular function. We then apply Algorithm 1 to optimize (1). 5 Experiments 5.1 Data Multi-document summarization. We use the DUC 2004 corpus4 that comprises 50 clusters (i.e., 50 different summarization tasks) with 10 documents per cluster on average. Each document contains multiple sentences and the goal is to produce a summary of all the documents for a given cluster. Comments summarization. We extracted a set of news articles and corresponding user comments from Yahoo! News website. Our corpus contains a set of 34 articles and each article is associated with anywhere from 100–500 comments. Each comment contains more than three sentences and 36 words per sentence on average. 
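Before turning to evaluation, the coverage function of Section 4.2 can be summarized in the following sketch. The data structures (popularity scores, comment and cluster assignments, similarity matrix) are illustrative stand-ins rather than the authors' code, and self-similarity is ignored in the cover term.

```python
import math

# Hypothetical sketch of g(S) = w(S) + alpha * cluster + beta * content
# + lambda * cover, following Section 4.2.
def coverage(S, U, w, comment_of, cluster_of, sim, alpha, beta, lam):
    popularity = sum(w[u] for u in S)

    def partition_term(part_of):       # sum over parts of sqrt(|S ∩ part|)
        sizes = {}
        for u in S:
            sizes[part_of[u]] = sizes.get(part_of[u], 0) + 1
        return sum(math.sqrt(n) for n in sizes.values())

    def cov(u, T):                     # cover of u by a set T
        return sum(sim[u][v] for v in T if v != u)

    cover = sum(min(cov(u, S), 0.25 * cov(u, U)) for u in S)
    return (popularity
            + alpha * partition_term(comment_of)    # cluster contribution
            + beta * partition_term(cluster_of)     # content contribution
            + lam * cover)                          # cover contribution
```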
5.2 Evaluation For each summarization task, we compare the system output (i.e., summaries automatically produced by the algorithm) against the humangenerated summaries and evaluate the performance in terms of ROUGE score (Lin, 2004), a standard recall-based evaluation measure used in summarization. A system that produces higher ROUGE scores generates better quality summary and vice versa. We use the following evaluation settings in our experiments for each summarization task: (1) For multi-document summarization, we compute the ROUGE-15 scores that was the main evaluation criterion for DUC 2004 evaluations. 3The choice of the value 0.25 in the cover component is inspired by the observations made by (Lin and Bilmes, 2011) for the α value used in their cover function. 4http://duc.nist.gov/duc2004/tasks.html 5ROUGE v1.5.5 with options: -a -c 95 -b 665 -m -n 4 -w 1.2 (2) For comment summarization, the collection of user comments associated with a given article is typically much larger. Additionally, individual comments are noisy, wordy, diverse, and informally written. Hence for this task, we use a slightly different evaluation criterion that is inspired from the DUC 2005-2007 summarization evaluation tasks. We represent the content within each comment c (i.e., all sentences S(c) comprising the comment) as a single node in the graph. We then run our summarization algorithm on the instantiated graph to produce a summary for each news article. In addition, each news article and corresponding set of comments were presented to three human annotators. They were asked to select a subset of comments (at most 20 comments) that best represented a summary capturing the most popular as well as diverse set of views and opinions expressed by different users that are relevant to the given news article. We then compare the automatically generated comment summaries against the human-generated summaries and compute the ROUGE-1 and ROUGE-2 scores.6 This summarization task is particularly hard for even human annotators since user-generated comments are typically noisy and there are several hundreds of comments per article. Similar to existing work in the literature (Sekine and Nobata, 2003), we computed inter-annotator agreement for the humans by comparing their summaries against each other on a small held-out set of articles. The average ROUGE-1 F-scores observed for humans was much higher (59.7) than that of automatic systems measured against the human-generated summaries (our best system achieved a score of 28.9 ROUGE-1 on the same dataset). This shows that even though this is a new type of summarization task, humans tend to generate more consistent summaries and hence their annotations are reliable for evaluation purposes as in multi-document summarization. 5.3 Results Multi-document summarization. (1) Table 1 compares the performance of our system with the previous best reported system that participated in the DUC 2004 competition. We also include for comparison another baseline—a version 6ROUGE v1.5.5 with options: -a -n 2 -x -m -2 4 -u -c 95 -r 1000 -f A -p 0.5 -t 0 -d -l 150 1019 of our system that approximates the submodular objective function proposed by (Lin and Bilmes, 2011).7 As shown in the results, our best system8 which uses the hs dispersion function achieves a better ROUGE-1 F-score than all other systems. (2) We observe that the hm and ht dispersion functions produce slightly lower scores than hs, which may be a characteristic of this particular summarization task. 
We believe that the empirical results achieved by different dispersion functions depend on the nature of the summarization tasks and there are task settings under which hm or ht perform better than hs. For example, we show later how using the ht dispersion function yields the best performance on the comments summarization task. Regardless, the theoretical guarantees presented in this paper cover all these cases. (3) We also analyze the contributions of individual components of the new objective function towards summarization performance by selectively setting certain parameters to 0. Table 2 illustrates these results. We clearly see that each component (popularity, cluster contribution, dispersion) individually yields a reasonable summarization performance but the best result is achieved by the combined system (row 5 in the table). We also contrast the performance of the full system with and without the dispersion component (row 4 versus row 5). The results show that optimizing for dispersion yields an improvement in summarization performance. (4) To understand the effect of utilizing syntactic structure and semantic similarity for constructing the summarization graph, we ran the experiments using just the unigrams and bigrams; we obtained a ROUGE-1 F-score of 37.1. Thus, modeling the syntactic structure (using relations extracted 7Note that Lin & Bilmes (2011) report a slightly higher ROUGE-1 score (F-score 38.90) on DUC 2004. This is because their system was tuned for the particular summarization task using the DUC 2003 corpus. On the other hand, even without any parameter tuning our method yields good performance, as evidenced by results on the two different summarization tasks. However, since individual components within our objective function are parametrized it is easy to tune them for a specific task or genre. 8For the full system, we weight certain parameters pertaining to cluster contributions and dispersion higher (α = β = δ = 5) compared to the rest of the objective function (λ = 1). Lin & Bilmes (2011) also observed a similar finding (albeit via parameter tuning) where weighting the cluster contribution component higher yielded better performance. If the maximum number of sentences/comments chosen were k, we brought both hs and ht to the same approximate scale as hm by dividing hs by k(k −1)/2 and ht by k −1. from dependency parse tree) along with computing similarity in semantic spaces (using WordNet) clearly produces an improvement in the summarization quality (+1.4 improvement in ROUGE-1 F-score). However, while the structured representation is beneficial, we observed that dispersion (and other individual components) contribute similar performance gains even when using ngrams alone. So the improvements obtained from the structured representation and dispersion are complementary. System ROUGE-1 F Best system in DUC 2004 37.9 (Lin and Bilmes, 2011), no tuning 37.47 Our algorithm with h = hm 37.5 h = hs 38.5 h = ht 36.8 Table 1: Performance on DUC 2004. Comments summarization. (1) Table 3 compares the performance of our system against a baseline system that is constructed by picking comments in order of decreasing length, i.e., we first pick the longest comment (comprising the most number of characters), then the next longest comment and so on, to create an ordered set of comments. The intuition behind this baseline is that longer comments contain more content and possibly cover more topics than short ones. 
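A sketch of this baseline, with k the number of comments kept (at most 20, matching the human summaries):

```python
# Decreasing-length baseline: keep the k longest comments (by characters).
def length_baseline(comments, k=20):
    return sorted(comments, key=len, reverse=True)[:k]
```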
From the table, we observe that the new system (using either dispersion function) outperforms the baseline by a huge margin (+44% relative improvement in ROUGE-1 and much bigger improvements in ROUGE-2 scores). One reason behind the lower ROUGE-2 scores for the baseline might be that while long comments provide more content (in terms of size), they also add noise and irrelevant information to the generated summaries. Our system models sentences using the syntactic structure and semantics and jointly optimizes for multiple summarization criteria (including dispersion) which helps weed out the noise and identify relevant, useful information within the comments thereby producing better quality summaries. The 95% confidence interval scores for the best system on this task is [36.5–46.9]. (2) Unlike the multi-document summarization, here we observe that the ht dispersion function yields the best empirical performance for this task. This observation supports our claim that the choice of the specific dispersion function depends 1020 Objective function components ROUGE-1 F α = β = λ = δ = 0 35.7 w(S) = β = λ = δ = 0 35.1 h = hs, w(S) = α = β = λ = 0 37.1 δ = 0 37.4 w(S), α, β, λ, δ > 0 38.5 Table 2: Performance with different parameters (DUC). on the summarization task and that the dispersion functions proposed in this paper have a wider variety of use cases. (3) Results showing contributions from individual components of the new summarization objective function are listed in Table 4. We observe a similar pattern as with multi-document summarization. The full system using all components outperform all other parameter settings, achieving the best ROUGE-1 and ROUGE-2 scores. The table also shows that incorporating dispersion into the objective function yields an improvement in summarization quality (row 4 versus row 5). System ROUGE-1 ROUGE-2 Baseline (decreasing length) 28.9 2.9 Our algorithm with h = hm 39.2 13.2 h = hs 40.9 15.0 h = ht 41.6 16.2 Table 3: Performance on comments summarization. Objective function ROUGE-1 ROUGE-2 components α = β = λ = δ = 0 36.1 9.4 w(S) = β = λ = δ = 0 32.1 4.9 h = ht, w(S) = α = β = λ = 0 37.8 11.2 δ = 0 38.0 11.6 w(S), α, β, λ, δ > 0 41.6 16.2 Table 4: Performance with different parameters (comments). 6 Conclusions We introduced a new general-purpose graph-based summarization framework that combines a submodular coverage function with a non-submodular dispersion function. We presented three natural dispersion functions that represent three different ways of ensuring non-redundancy (using sentence dissimilarities) for summarization and proved that a simple greedy algorithm can obtain an approximately optimal summary in all these cases. Experiments on two different summarization tasks show that our algorithm outperforms algorithms that rely only on submodularity. Finally, we demonstrated that using a structured representation to model sentences in the graph improves summarization quality. For future work, it would be interesting to investigate other related developments in this area and perhaps combine them with our approach to see if further improvements are possible. Firstly, it would interesting to see if dispersion offers similar improvements over a tuned version of the submodular framework of Lin and Bilmes (2011). In a very recent work, Lin and Bilmes (2012) demonstrate a further improvement in performance for document summarization by using mixtures of submodular shells. 
This is an interesting extension of their previous submodular framework and while the new formulation permits more complex functions, the resulting function is still submodular and hence can be combined with the dispersion measures proposed in this paper. A different body of work uses determinantal point processes (DPP) to model subset selection problems and adapt it for document summarization (Kulesza and Taskar, 2011). Note that DPPs use similarity kernels for performing inference whereas our measures are combinatorial and not kernel-representable. While approximation guarantees for DPPs are open, it would be interesting to investigate the empirical gains by combining DPPs with dispersion-like functions. Acknowledgments We thank the anonymous reviewers for their many useful comments. References Allan Borodin, Hyun Chul Lee, and Yuli Ye. 2012. Max-sum diversification, monotone submodular functions and dynamic updates. In Proc. PODS, pages 155–166. Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proc. SIGIR, pages 335–336. Barun Chandra and Magn´us Halld´orsson. 2001. Facility dispersion and remote subgraphs. J. Algorithms, 38(2):438–465. Dietmar Cieslik. 2001. The Steiner Ratio. Springer. 1021 John M. Conroy and Dianne P. O’Leary. 2001. Text summarization via hidden Markov models. In Proc. SIGIR, pages 406–407. Hal Daum´e, III and Daniel Marcu. 2006. Bayesian query-focused summarization. In Proc. COLING/ACL, pages 305–312. Marie-Catherine de Marneffe, Bill Maccartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proc. LREC, pages 449–454. Elena Filatova. 2004. Event-based extractive summarization. In Proc. ACL Workshop on Summarization, pages 104–111. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proc. COLING. Makoto Imase and Bernard M. Waxman. 1991. Dynamic Steiner tree problem. SIAM J. Discrete Mathematics, 4(3):369–384. Hyun Duk Kim, Kavita Ganesan, Parikshit Sondhi, and ChengXiang Zhai. 2011. Comprehensive review of opinion summarization. Technical report, University of Illinois at Urbana-Champaign. Alex Kulesza and Ben Taskar. 2011. Learning determinantal point processes. In Proc. UAI, pages 419– 427. Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proc. ACL, pages 510–520. Hui Lin and Jeff Bilmes. 2012. Learning mixtures of submodular shells with application to document summarization. In Proc. UAI, pages 479–490. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out: Proc. ACL Workshop, pages 74–81. G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. 1978. An analysis of approximations for maximizing submodular set functions I. Mathematical Programming, 14(1):265–294. Ani Nenkova and Kathleen McKeown. 2012. A survey of text summarization techniques. In Charu C. Aggarwal and ChengXiang Zhai, editors, Mining Text Data, pages 43–76. Springer. Siddharth Patwardhan and Ted Pedersen. 2006. Using WordNet-based context vectors to estimate the semantic relatedness of concepts. In Proc. EACL Workshop on Making Sense of Sense: Bringing Computational Linguistics and Psycholinguistics Together, pages 1–8. Vahed Qazvinian, Dragomir R. Radev, and Arzucan ¨Ozg¨ur. 2010. Citation summarization through keyphrase extraction. In Proc. 
COLING, pages 895– 903. Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-T¨ur. 2010. Long story short—Global unsupervised models for keyphrase based meeting summarization. Speech Commun., 52(10):801–815. Satoshi Sekine and Chikashi Nobata. 2003. A survey for multi-document summarization. In Proc. HLTNAACL Workshop on Text Summarization, pages 65–72. Beaux Sharifi, Mark-Anthony Hutton, and Jugal Kalita. 2010. Summarizing microblogs automatically. In Proc. HLT/NAACL, pages 685–688. Chao Shen and Tao Li. 2010. Multi-document summarization via the minimum dominating set. In Proc. COLING, pages 984–992. Hiroya Takamura and Manabu Okumura. 2009. Text summarization model based on maximum coverage problem and its variant. In Proc. EACL, pages 781– 789. Koji Yatani, Michael Novati, Andrew Trusty, and Khai N. Truong. 2011. Review spotlight: A user interface for summarizing user-generated reviews using adjective-noun word pairs. In Proc. CHI, pages 1541–1550. 1022
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1023–1032, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Subtree Extractive Summarization via Submodular Maximization Hajime Morita Tokyo Institute of Technology, Japan [email protected] Hiroya Takamura Tokyo Institute of Technology, Japan [email protected] Ryohei Sasano Tokyo Institute of Technology, Japan [email protected] Manabu Okumura Tokyo Institute of Technology, Japan [email protected] Abstract This study proposes a text summarization model that simultaneously performs sentence extraction and compression. We translate the text summarization task into a problem of extracting a set of dependency subtrees in the document cluster. We also encode obligatory case constraints as must-link dependency constraints in order to guarantee the readability of the generated summary. In order to handle the subtree extraction problem, we investigate a new class of submodular maximization problem, and a new algorithm that has the approximation ratio 1 2(1 −e−1). Our experiments with the NTCIR ACLIA test collections show that our approach outperforms a state-of-the-art algorithm. 1 Introduction Text summarization is often addressed as a task of simultaneously performing sentence extraction and sentence compression (Berg-Kirkpatrick et al., 2011; Martins and Smith, 2009). Joint models of sentence extraction and compression have a great benefit in that they have a large degree of freedom as far as controlling redundancy goes. In contrast, conventional two-stage approaches (Zajic et al., 2006), which first generate candidate compressed sentences and then use them to generate a summary, have less computational complexity than joint models. However, two-stage approaches are suboptimal for text summarization. For example, when we compress sentences first, the compressed sentences may fail to contain important pieces of information due to the length limit imposed on each sentence. On the other hand, when we extract sentences first, an important sentence may fail to be selected, simply because it is long. Enumerating a huge number of compressed sentences is also infeasible. Joint models can prune unimportant or redundant descriptions without resorting to enumeration. Meanwhile, submodular maximization has recently been applied to the text summarization task, and the methods thereof have performed very well (Lin and Bilmes, 2010; Lin and Bilmes, 2011; Morita et al., 2011). Formalizing summarization as a submodular maximization problem has an important benefit inthat the problem can be solved by using a greedy algorithm with a performance guarantee. We therefore decided to formalize the task of simultaneously performing sentence extraction and compression as a submodular maximization problem. That is, we extract subsentences for making the summary directly from all available subsentences in the documents and not in a stepwise fashion. However, there is a difficulty with such a formalization. In the past, the resulting maximization problem has been often accompanied by thousands of linear constraints representing logical relations between words. The existing greedy algorithm for solving submodular maximization problems cannot work in the presence of such numerous constraints although monotone and nonmonotone submodular maximization with constraints other than budget constraints have been studied (Lee et al., 2009; Kulik et al., 2009; Gupta et al., 2010). 
In this study, we avoid this difficulty by reducing the task to one of extracting dependency subtrees from sentences in the source documents. The reduction replaces the difficulty of numerous linear constraints with another difficulty wherein two subtrees can share the same word to1023 ken when they are selected from the same sentence, and as a result, the cost of the union of the two subtrees is not always the mere sum of their costs. We can overcome this difficulty by tackling a new class of submodular maximization problem: a budgeted monotone nondecreasing submodular function maximization with a cost function, where the cost of an extraction unit varies depending on what other extraction units are selected. By formalizing the subtree extraction problem as this new maximization problem, we can treat the constraints regarding the grammaticality of the compressed sentences in a straightforward way and use an arbitrary monotone submodular word score function for words including our word score function (shown later). We also propose a new greedy algorithm that solves this new class of maximization problem with a performance guarantee 1 2(1 −e−1). We evaluated our method on by using it to perform query-oriented summarization (Tang et al., 2009). Experimental results show that it is superior to state-of-the-art methods. 2 Related Work Submodularity is formally defined as a property of a set function for a finite universe V . The function f : 2V →R maps a subset S ⊆V to a real value. If for any S, T ⊆V , f(S ∪T) + f(S ∩T) ≤ f(S)+f(T), f is called submodular. This definition is equivalent to that of diminishing returns, which is well known in the field of economics: f(S ∪{u}) −f(S) ≤f(T ∪{u}) −f(T), where T ⊆S ⊆V and u is an element of V . Diminishing returns means that the value of an element u remains the same or decreases as S becomes larger. This property is suitable for summarization purposes, because the gain of adding a new sentence to a summary that already contains sufficient information should be small. Therefore, many studies have formalized text summarization as a submodular maximization problem (Lin and Bilmes, 2010; Lin and Bilmes, 2011; Morita et al., 2011). Their approaches, however, have been based on sentence extraction. To our knowledge, there is no study that addresses the joint task of simultaneously performing compression and extraction through an approximate submodular maximization with a performance guarantee. In the field of constrained maximization problems, Kulik et al. (2009) proposed an algorithm that solves the submodular maximization problem under multiple linear constraints with a performance guarantee 1 −e−1 in polynomial time. Although their approach can represent more flexible constraints, we cannot use their algorithm to solve our problem, because their algorithm needs to enumerate many combinations of elements. Integer linear programming (ILP) formulations can represent such flexible constraints, and they are commonly used to model text summarization (McDonald, 2007). Berg-Kirkpatrick et al. (2011) formulated a unified task of sentence extraction and sentence compression as an ILP. However, it is hard to solve large-scale ILP problems exactly in a practical amount of time. 3 Budgeted Submodular Maximization with Cost Function 3.1 Problem Definition Let V be the finite set of all valid subtrees in the source documents, where valid subtrees are defined to be the ones that can be regarded as grammatical sentences. 
In this paper, we regard subtrees containing the root node of the sentence as valid. Accordingly, V denotes a set of all rooted subtrees in all sentences. A subtree contains a set of elements that are units in a dependency structure (e.g., morphemes, words or clauses). Let us consider the following problem of budgeted monotone nondecreasing submodular function maximization with a cost function: maxS⊆V {f(S) : c (S) ≤L} , where S is a summary represented as a set of subtrees, c(·) is the cost function for the set of subtrees, L is our budget, and the submodular function f(·) scores the summary quality. The cost function is not always the sum of the costs of the covered subtrees, but depends on the set of the covered elements by the subtrees. Here, we will assume that the generated summary has to be as long as or shorter than the given summary length limit, as measured by the number of characters. This means the cost of a subtree is the integer number of characters it contains. V is partitioned into exclusive subsets B of valid subtrees, and each subset corresponds to the original sentence from which the valid subtrees derived. However, the cost of a union of subtrees from different sentences is simply the sum of the costs of subtrees, while the cost of a union of subtrees from the same sentence is smaller than the sum of the costs. Therefore, the problem can be represented as follows: 1024 max S⊆V { f(S) : ∑ B∈B c (B ∩S) ≤L } . (1) For example, if we add a subtree t containing words {wa, wb, wc} to a summary that already covers words {wa, wb, wd} from the same sentence, the additional cost of t is only c({wc}) because wa and wb are already covered1. The problem has two requirements. The first requirement is that the union of valid subtrees is also a valid subtree. The second requirement is that the union of subtrees and a single valid subtree have the same score and the same cost if they cover the same elements. We will refer to the single valid subtree as the equivalent subtree of the union of subtrees. These requirements enable us to represent sentence compression as the extraction of subtrees from a sentence. This is because the requirements guarantee that the extracted subtrees represent a sentence. 3.2 Greedy Algorithm We propose Algorithm 1 that solves the maximization problem (Eq.1). The algorithm is based on ones proposed by Khuller et al. (1999) and Krause et al. (2005). Instead of enumerating all candidate subtrees, we use a local search to extract the element that has the highest gain per cost. In the algorithm, Gi indicates a summary set obtained by adding element si to Gi−1. U means the set of subtrees that are not extracted. The algorithm iteratively adds to the current summary the element si that has the largest ratio of the objective function gain to the additional cost, unless adding it violates the budget constraint. We set a parameter r that is the scaling factor proposed by Lin and Bilmes (2010). After the loop, the algorithm compares Gi with the {s∗} that has the largest value of the objective function among all subtrees that are under the budget, and it outputs the summary candidate with the largest value. Let us analyze the performance guarantee of Algorithm 12. 1Each subset B corresponds to a kind of greedoid constraint. V implicitly constrains the model such that it can only select valid subtrees from a set of nodes and edges. 2Our performance guarantee is lower than that reported by Lin and Bilmes (2010). However, their proof is erroneous. 
In their proof of Lemma 2, they derive ∀u ∈ S∗\Gi−1, ρu(Gi−1) Cru ≤ ρvi (Gi−1) Crvi , for any i(1 ≤i ≤|G|), from line 4 of their Algorithm 1, which selects the densest element out of all available elements. However, the inequality does not hold for i, for which element u selected on line 4 is discarded on line 5 of their algorithm. The performance guarantee of their algorithm is actually the same as ours, since Algorithm 1 Modified greedy algorithm for budgeted submodular function maximization with a cost function . 1: G0 ←φ 2: U ←V 3: i ←1 4: while U ̸= φ do 5: si ←arg maxs∈U f(Gi−1∪{s})−f(Gi−1) (c(Gi−1∪{s})−c(Gi−1))r 6: if c({si} ∪Gi−1) ≤L then 7: Gi ←Gi−1 ∪{si} 8: i ←i + 1 9: end if 10: U ←U\{si} 11: end while 12: ¯s ←arg maxs∈V,c(s)≤L f({s}) 13: return Gf = arg maxS∈{{¯s},Gi} f(S) Theorem 1 For a normalized monotone submodular function f(·), Algorithm 1 has a constant approximation factor when r = 1 as follows: f(Gf) ≥ (1 2(1 −e−1) ) f(S∗), (2) where S∗is the optimal solution and, Gf is the solution obtained by Greedy Algorithm 1. Proof. See appendix. 3.3 Relation with Discrete Optimization We argue that our optimization problem can be regarded as an extraction of subtrees rooted at a given node from a directed graph, instead of from a tree. Let D be the set of edges of the directed graph, F be a subset of D that is a subtree. In the field of combinatorial optimization, a pair (D, F) is a kind of greedoid: directed branching greedoid (Schmidt, 1991). A greedoid is a generalization of the matroid concept. However, while matroids are often used to represent constraints on submodular maximization problems (Conforti and Cornu´ejols, 1984; Calinescu et al., 2011), greedoids have not been used for that purpose, in spite of their high representation ability. To our knowledge, this is the first study that gives a constant performance guarantee for the submodular maximization under greedoid (non-matroid) constraints. the guarantee 1 2(1 −e−1) was already proved by Krause and Guestrin (2005). We show a counterexample. Suppose that V is { e1(density 4:cost 6), e2(density 2:cost 4), e3(density 3:cost 1), e4(density 1:cost 1) }, and cost limit K is 10. The optimal solution is S∗= {e1, e2}. Their algorithm selects e1, e3, e4 in this order. However the algorithm selects e2 on line 4 after selecting e3, and it drops e2 on line 5. As a result, e4 selected by the algorithm does not satisfy the inequality ∀u ∈S∗\Gi−1, ρu(Gi−1) Cru ≤ ρvi (Gi−1) Crvi . 1025 4 Joint Model of Extraction and Compression We will formalize the unified task of sentence compression and extraction as a budgeted monotone nondecreasing submodular function maximization with a cost function. In this formalization, a valid subtree of a sentence represents a candidate of a compressed sentence. We will refer to all valid subtrees of a given sentence as a valid set. A valid set corresponds to all candidates of the compression of a sentence. Note that although we use the valid set in the formalization, we do not have to enumerate all the candidates for each sentence. Since, from the requirements, the union of valid subtrees is also a valid subtree in the valid set, the model can extract one or more subtrees from one sentence, and generate a compressed sentence by merging those subtrees to generate an equivalent subtree. Therefore, the joint model can extract an arbitrarily compressed sentence as a subtree without enumerating all candidates. 
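A minimal Python sketch of Algorithm 1 (Section 3.2) is given below; `f` is any normalized monotone submodular summary scorer and `cost` the set-cost function of Eq. 1, both assumed given. The candidate set is enumerated explicitly here for clarity, whereas the actual system finds the densest subtree with the dynamic program of Section 4.2.

```python
def greedy_with_cost(V, f, cost, L, r=1.0):
    """Approximate max f(S) subject to cost(S) <= L (Algorithm 1).

    V: iterable of candidate subtrees; f: monotone submodular score of a set;
    cost: set cost (words shared within a sentence are charged only once);
    L: budget in characters; r: the scaling factor of Lin and Bilmes (2010).
    """
    G = set()
    U = set(V)
    while U:
        def density(s):
            # gain per (scaled) additional cost of adding s to the current summary
            dc = cost(G | {s}) - cost(G)
            return (f(G | {s}) - f(G)) / (dc ** r) if dc > 0 else float("inf")
        s = max(U, key=density)
        if cost(G | {s}) <= L:
            G = G | {s}
        U.discard(s)          # s is removed whether or not it was added
    # compare with the best single element that fits the budget (line 12)
    singles = [{s} for s in V if cost({s}) <= L]
    best_single = max(singles, key=f, default=set())
    return G if f(G) >= f(best_single) else best_single
```

The final comparison with the best single element mirrors line 12 of the pseudocode and is needed for the guarantee of Theorem 1.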
The joint model can remove the redundant part as well as the irrelevant part of a sentence, because the model simultaneously extracts and compresses sentences. We can approximately solve the subtree extraction problem by using Algorithm 1. On line 5 of the algorithm, the subtree extraction is performed as a local search that finds maximal density subtrees from the whole documents. The maximal density subtree is a subtree that has the highest score per cost of subtree. We use a cost function to represent the cost, which indicates the length of word tokens in the subtree. In this paper, we address the task of summarization of Japanese text by means of sentence compression and extraction. In Japanese, syntactic subtrees that contain the root of the dependency tree of the original sentence often make grammatical sentences. This means that the requirements mentioned in Section 3.1 that a union of valid subtrees is a valid and equivalent tree is often true for Japanese. The root indicates the predicate of a sentence, and it is syntactically modified by other prior words. Some modifying words can be pruned. Therefore, sentence compression can be represented as edge pruning. The linguistic units we extract are bunsetsu phrases, which are syntactic chunks often containing a functional word after one or more content words. We will refer to bunsetsu phrases as phrases for simplicity. Since Japanese syntactic dependency is generally defined between two phrases, we use the phrases as the nodes of subtrees. In this joint model, we generate a compressed sentence by extracting an arbitrary subtree from a dependency tree of a sentence. However, not all subtrees are always valid. The sentence generated by a subtree can be unnatural even though the subtree contains the root node of the sentence. To avoid generating such ungrammatical sentences, we need to detect and retain the obligatory dependency relations in the dependency tree. We address this problem by imposing must-link constraints if a phrase corresponds to an obligatory case of the main predicate. We merge obligatory phrases with the predicate beforehand so that the merged nodes make a single large node. Although we focus on Japanese in this paper, our approach can be applied to English and other languages if certain conditions are satisfied. First, we need a dependency parser of the language in order to represent sentence compression as dependency tree pruning. Moreover, although, in Japanese, obligatory cases distinguish which edges of the dependency tree can be pruned or not, we need another technique to distinguish them in other languages. For example we can distinguish obligatory phrases from optional ones by using semantic role labeling to detect arguments of predicates. The adaptation to other languages is left for future work. 4.1 Objective Function We extract subtrees from sentences in order to solve the query-oriented summarization problem as a unified one consisting of sentence compression and extraction. We thus need to allocate a query relevance score to each node. Off-the-shelf similarity measures such as the cosine similarity of bag-of-words vectors with query terms would allocate scores to the terms that appear in the query, but would give no scores to terms that do not appear in it. With such a similarity, sentence compression extracts nearly only the query terms and fails to contain important information. Instead, we used Query SnowBall (QSB) (Morita et al., 2011) to calculate the query relevance score of each phrase. 
QSB is a method for query-oriented summarization, which calculates the similarity between query terms and each word by using cooccurrences within the source documents. Although the authors of QSB also provided scores of word pairs to avoid putting excessive penalties 1026 on word overlaps, we do not score word pairs. The score function is supermodular as a score function of subtree extraction3, because the union of two subtrees can have extra word pairs that are not included in either subtree. If the extra pair has a positive score, the score of the union is greater than the sum of the score of the subtrees. This violates the definition of submodularity, and invalidates the performance guarantee of our algorithms. We designed our objective function by combining this relevance score with a penalty for redundancy and too-compressed sentences. Important words that describe the main topic should occur multiple times in a good summary. However, excessive overlap undermines the quality of a summary, as do irrelevant words. Therefore, the scores of overlapping words should be lower than thoseof new words. The behavior can be represented by a submodular objective function that reduces word scores depending on those already included in the summary. Furthermore, a summary consisting of many too-compressed sentences would lack readability. We thus gives a positive reward to long sentences. The positive reward leads to a natural summary being generated with fewer sentences and indirectly penalizes too short sentences. Our positive reward for long sentences is represented as reward(S) = c(S) −|S|, (3) where c(S) is the cost of summary S, and |S| is the number of sentences in S. Since a sentence must contain more than one character, the reward consistently gives a positive score, and gives a higher score to a summary that consists of fewer sentences. Let d be the damping rate, countS(w) be the number of sentences containing word w in summary S, words(S) be the set of words included in summary S, qsb(w) be the query relevance score of word w, and γ be a parameter that adjusts the rate of sentence compression. Our score function for a summary S is as follows: f(S) = ∑ w∈words(S)    countS(w)−1 ∑ i=0 qsb(w)di   + γ reward(S). (4) An optimization problem with this objective function cannot be regarded as an ILP problem because it contains non-linear terms. It is also ad3The score is still submodular for the purpose of sentence extraction. vantageous that the submodular maximization can deal with such objective functions. Note that the objective function is such that it can be calculated according to the type of word. Due to the nature of the objective function, we can use dynamic programming to effectively search for the subtree with the maximal density. 4.2 Local Search for Maximal Density Subtree Let us now discuss the local search used on line 5 of Algorithm 1. We will use a fast algorithm to find the maximal density subtree (MDS) of a given sentence for each cost in Algorithm 1. Consider the objective function Eq. 4, We can ignore the second term of the reward function while looking for the MDS in a sentence because the number of sentences is the same for every MDS in a sentence. That is, the gain function of adding a subtree to a summary can be represented as the sum of gains for words: g(t) = ∑ w∈t {gainS(w) + freqt(w)c(w)γ}, gainS(w) = qsb(w)dcountS(w), where freqt(w) is the number of ws in subtree t, and gainS(w) is the gain of adding the word w to the summary S. 
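The score of Eq. 4 and its per-word structure can be sketched directly; the representation of a compressed sentence as a list of (word, character-cost) pairs and the `qsb` dictionary of Query SnowBall scores are assumptions of this sketch.

```python
def summary_score(summary, qsb, d=0.5, gamma=0.1):
    """Eq. 4: damped word relevance plus the reward of Eq. 3 for few, long sentences.

    summary: list of compressed sentences, each a list of (word, character_cost) pairs.
    qsb: dict mapping a word to its query-relevance score.
    """
    # in how many selected sentences does each word appear (count_S(w))
    sent_count = {}
    for sent in summary:
        for w in set(word for word, _ in sent):
            sent_count[w] = sent_count.get(w, 0) + 1
    relevance = sum(
        sum(qsb.get(w, 0.0) * d ** i for i in range(k))
        for w, k in sent_count.items()
    )
    total_cost = sum(c for sent in summary for _, c in sent)   # c(S), in characters
    reward = total_cost - len(summary)                         # Eq. 3
    return relevance + gamma * reward
```

Because the score decomposes over word types, the marginal gain g(t) above can be evaluated word by word during the local search.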
Our algorithm is based on dynamic programming, and it selects a subtree that maximizes the gain function per cost. When the word gain is a constant, the algorithm proposed by Hsieh et al. (2010) can be used to find the MDS. We extended this algorithm to work for submodular word gain functions that are not constant. Note that the gain of a word that occurs only once in the sentence, can be treated as a constant. In what follows, we will describe an extended algorithm to find the MDS even if there is word overlap. For example, let us describe how to obtain the MDS in the case of a binary tree. First let us tackle the case in which the gain is always constant. Let n be a node in the tree, a and b be child nodes of n, c(n) be the cost of n, mdsc a be the MDS rooted at a and have cost c. mdsn = {mdsc(n) n , . . . , mdsL n} denotes the set of MDSs for each cost and its root node n. The valid subtrees rooted at n can be obtained by taking unions of n with one or both of t1 ∈mdsa and t2 ∈mdsb. mdsc n is the union that has the largest gain over the union with the cost of c (by enumerating all the unions). The MDS for 1027 the sentence root can be found by calculating each mdsc n from the bottom of the tree to the top. Next, let us consider the objective function that returns the sum of values of submodular word gain functions. When there is no word overlap within the union, we can obtain mdsc n in the same manner as for the constant gain. In contrast, if the union includes word overlap, the gain is less than the sum of gains: g(mdsc n) ≤g(n) + g(mdsk a) + g(mdsc−k−c(n) b ), where k and c are variables. The score reduction can change the order of the gains of the union. That is, it is possible that another union without word overlaps will have a larger gain. Therefore, the algorithm needs to know whether each t ∈mdsn has the potential to have word overlaps with other MDSs. Let O be the set of words that occur twice or more in the sentence on which the local seach focuses. The algorithm stores MDS for each o ⊆O, as well as each cost. By storing MDS for each o and cost as shown in Fig. 1, the algorithm can find MDS with the largest gain over the combinations of subtrees. Algorithm 2 shows the procedure. In it, t and m denote subtrees, words(t) returns a set of words in the subtree, g(t) returns the gain of t, tree(n) means a tree consisting of node n, and t ∪m denotes the union of subtrees: t and m. subt indicates a set of current maximal density subtrees among the combinations calculated before. newt indicates a set of temporary maximal density subtrees for the combinations calculated from line 4 to 8. subt[cost,ws] indicates a element of subt that has a cost cost and contains a set of words ws. newt[cost,ws] is defined similarly. Line 1 sets subt to a set consisting of a subtree that indicates node n itself. The algorithm calculates maximal density subtrees within combinations of the root node n and MDSs rooted at child nodes of n. Line 3 iteratively adds MDSs rooted at a next child node to the combinations; the algorithm then calculates MDSs newt between subt and the MDSs of the child node. The procedure from line 6 to 8 selects a subtree that has a larger gain from the temporary maximal subtree and the union of t and m. The computational complexity of this algorithm is O(NC2) when there is no word overlap within the sentence, where C denotes the cost of the whole sentence, and N denotes the number of nodes in the sentence. The complexity order is the same as that of the algorithm of Hsieh et al. 
(2010). When we treat word overlaps, we need to count Algorithm 2 Algorithm for finding maximal density subtree for each cost: MDSs. Function: MDSs Require: root node n 1: subt[c(n),words(n)∩O] = tree(n) 2: newt = φ 3: for i ∈child node of n do 4: for t ∈MDSs(i) do 5: for m ∈subt do 6: index = [c(t ∪m), words(t ∪m) ∩O] 7: newtindex = arg maxj∈{newtindex,t∪m} g(j) 8: end for 9: end for 10: subt = newt 11: end for 12: return subt Figure 1: Maximal density subtree extraction. The right table enumerates the subtrees rooted at w2 in the left tree for all indices. The number in each tree node is the score of the word. all unions of combinations of the stored MDSs. There are at most (C2|O|) MDSs that the algorithm needs to store at each node. Therefore the total computational complexity is O(NC222|O|). Since it is unlikely that a sentence contains many word tokens of one type, the computational cost may not be so large in practical situations. 5 Experimental Settings We evaluate our method on Japanese QA test collections from NTCIR-7 ACLIA1 and NTCIR8 ACLIA2 (Mitamura et al., 2008; Mitamura et al., 2010). The collections contain questions and weighted answer nuggets. Our experimental settings followed the settings of (Morita et al., 2011), except for the maximum summary length. We generated summaries consisting of 140 Japanese characters or less, with the question as the query terms. We did this because our aim is to use our method in mobile situations. We used “ACLIA1 test data” to tune the parameters, and evaluated our method on “ACLIA2 test” data. We used JUMAN (Kurohashi and Kawahara, 2009a) for word segmentation and part-of-speech tagging, and we calculated idf over Mainichi newspaper articles from 1991 to 2005. For the de1028 POURPRE Precision Recall F1 F3 Lin and Bilmes (2011) 0.215 0.126 0.201 0.135 0.174 Subtree extraction (SbE) 0.268 0.238 0.213 0.159 0.190 Sentence extraction (NC) 0.278 0.206 0.215 0.139 0.183 Table 1: Results on ACLIA2 test data. pendency parsing, we used KNP (Kurohashi and Kawahara, 2009b). Since KNP internally has a flag that indicates either an “obligatory case” or an “adjacent case”, we regarded dependency relations flagged by KNP as obligatory in the sentence compression. KNP utilizes Kyoto University’s case frames (Kawahara and Kurohashi, 2006) as the resource for detecting obligatory or adjacent cases. To evaluate the summaries, we followed the practices of the TAC summarization tasks (Dang, 2008) and NTCIR ACLIA tasks, and computed pyramid-based precision with the allowance parameter, recall, and Fβ (where β is 1 or 3) scores. The allowance parameter was determined from the average nugget length for each question type of the ACLIA2 collection (Mitamura et al., 2010). Precision and recall are computed from the nuggets that the summary covered along with their weights. One of the authors of this paper manually evaluated whether each nugget matched the summary. We also used the automatic evaluation measure, POURPRE (Lin and Demner-Fushman, 2006). POURPRE is based on word matching of reference nuggets and system outputs. We regarded as stopwords the most frequent 100 words in Mainichi articles from 1991 to 2005 (the document frequency was used to measure the frequency). We also set the threshold of nugget matching as 0.5 and binarized the nugget matching, following the previous study (Mitamura et al., 2010). We tuned the parameters by using POURPRE on the development dataset. 
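For the simplest case handled by the local search of Section 4.2, constant word gains and no repeated word within a sentence, the maximal-density-subtree computation reduces to a tree knapsack over rooted subtrees. The node interface below (character cost, gain, children) is an assumption of the sketch, and the full Algorithm 2 additionally indexes each table by subsets of the repeated-word set O.

```python
NEG = float("-inf")

def best_rooted_subtrees(node, budget):
    """table[c] = best gain of a subtree rooted at `node` with total cost c.

    Simplified sketch: constant word gains, no repeated words in the sentence.
    node has .cost (characters), .gain (word score) and .children.
    """
    table = [NEG] * (budget + 1)
    if node.cost <= budget:
        table[node.cost] = node.gain              # the root phrase alone
    for child in node.children:
        child_table = best_rooted_subtrees(child, budget)
        merged = list(table)                      # option: skip this child entirely
        for c1, g1 in enumerate(table):
            if g1 == NEG:
                continue
            for c2, g2 in enumerate(child_table):
                if g2 == NEG or c1 + c2 > budget:
                    continue
                merged[c1 + c2] = max(merged[c1 + c2], g1 + g2)
        table = merged
    return table

def maximal_density_subtree(root, budget):
    """Return (density, cost, gain) of the best rooted subtree within the budget."""
    table = best_rooted_subtrees(root, budget)
    return max(((g / c, c, g) for c, g in enumerate(table) if c > 0 and g > NEG),
               default=None)
```

Recovering the subtree itself rather than its score requires back-pointers, which are omitted here.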
Lin and Bilmes (2011) designed a monotone submodular function for query-oriented summarization. Their succinct method performed well in DUC from 2004 to 2007. They proposed a positive diversity reward function in order to define a monotone submodular objective function for generating a non-redundant summary. The diversity reward gives a smaller gain for a biased summary, because it consists of gains based on three clusters and calculates a square root score with respect to each sentence. The reward also contains a score for the similarity of a sentence to the query, for purposes of query-oriented summaRecall Length # of nuggets Subtree extraction 0.213 11,143 100 Reconstructed (RC) 0.228 13,797 108 Table 2: Effect of sentence compression. rization. Their objective function also includes a coverage function based on the similarity wi,j between sentences. In the coverage function min function limits the maximum gain α ∑ i∈V wi,j, which is a small fraction α of the similarity between a sentence j and the all source documents. The objective function is the sum of the positive reward R and the coverage function L over the source documents V , as follows: F(S) = L(S) + 3 ∑ k=1 λkRQ,k(S), L(S) = ∑ i∈V min    ∑ j∈S wi,j, α ∑ k∈V wi,k   , RQ,k = ∑ c∈Ck v u u t ∑ j∈S∪c ( β N ∑ i∈V wi,j + (1 −β)rj,Q), where α, β and λk are parameters, and rj,Q represents the similarity between sentence j and query Q. We tuned the parameters on the development dataset. Lin and Bilmes (2011) used three clusters Ck with different granularities, which were calculated in advance. We set the granularity to (0.2N, 0.15N, 0.05N) according to the settings of them, where N is the number of sentences in a document. We also regarded as stopwords “教える(tell),” “知る(know),” “何(what)” and their conjugated forms, which are excessively common in questions. For the query expansion in the baseline, we used Japanese WordNet to obtain synonyms and hypernyms of query terms. 6 Results Table 1 summarizes our results. “Subtree extraction (SbE)” is our method, and “Sentence extraction (NC)” is a version of our method without compression. The NC has the same objective function but only extracts sentences. The F1measure and F3-measure of our method are 0.159 and 0.190 respectively, while those of the state-of1029 the-art baseline are 0.135 and 0.174 respectively. Unfortunately, since the document set is small, the difference is not statistically significant. Comparing our method with the one without compression, we can see that there are improvements in the F1 and F3 scores of the human evaluation, whereas the POURPRE score of the version of our method without compression is higher than that of our method with compression. The compression improved the precision of our method, but slightly decreased the recall. For the error analyses, we reconstructed the original sentences from which our method extracted the subtrees. Table 2 shows the statistics of the summaries of SbE and reconstructed summaries (RC). The original sentences covered 108 answer nuggets in total, and 8 of these answer nuggets were dropped by the sentence compression. Comparing the results of SbE and RC, we can see that the sentence compression caused the recall of SbE to be 7% lower than that of RC. However, the drop is relatively small in light of the fact that the sentence compression can discard 19% of the original character length with SbE. This suggests that the compression can efficiently prune words while avoiding pruning informative content. 
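The coverage term L(S) of this baseline objective can be written down directly from the formula; the sketch below assumes a precomputed sentence-similarity matrix and omits the square-root diversity reward R_{Q,k}.

```python
import numpy as np

def coverage(W, S, alpha=0.1):
    """L(S) of the baseline: how well summary S covers each sentence i,
    saturated at a fraction alpha of sentence i's total similarity mass.

    W: (N, N) matrix of pairwise sentence similarities w_ij; S: list of indices.
    """
    covered = W[:, S].sum(axis=1)          # sum_{j in S} w_ij for every i
    saturation = alpha * W.sum(axis=1)     # alpha * sum_{k in V} w_ik
    return np.minimum(covered, saturation).sum()
```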
Since the summary length is short, we can select only two or three sentences for a summary. As Morita et al. (2011) mentioned, answer nuggets overlap each other. The baseline objective function R tends to extract sentences from various clusters. If the answer nuggets are present in the same cluster, the objective function does not fit the situation. However, our methods (SbE and NC) have a parameter d that can directly adjust overlap penalty with respect to word importance as well as query relevance. This may help our methods to cover similar answer nuggets. In fact, the development data resulted in a relatively high parameter d (0.8) for NC compared with 0.2 for SbE. 7 Conclusions and Future Work We formalized a query-oriented summarization, which is a task in which one simultaneously performs sentence compression and extraction, as a new optimization problem: budgeted monotone nondecreasing submodular function maximization with a cost function. We devised an approximate algorithm to solve the problem in a reasonable computational time and proved that its approximation rate is 1 2(1 −e−1). Our approach achieved an F3-measure of 0.19 on the ACLIA2 Japanese test collection, which is 9.2 % improvement over a state-of-the-art method using a submodular objective function. Since our algorithm requires that the objective function is the sum of word score functions, our proposed method has a restriction that we cannot use an arbitrary monotone submodular function as the objective function for the summary. Our future work will improve the local search algorithm to remove this restriction. As mentioned before, we also plan to adapt of our system to other languages. Appendix Here, we analyze the performance guarantee of Algorithm 1. We use the following notation. S∗is the optimal solution, cu(S) is the residual cost of subtree u when S is already covered, and i∗is the last step before the algorithm discards a subtree s ∈S∗or a part of the subtree s. This is because the subtree does not belong to either the approximate solution or the optimal solution. We can remove the subtree s′ from V without changing the approximate rate. si is the i-th subtree obtained by line 5 of Algorithm 1. Gi is the set obtained after adding subtree si to Gi−1 from the valid set Bi. Gf is the final solution obtained by Algorithm 1. f(·) : 2V →R is a monotone submodular function. We assume that there is an equivalent subtree with any union of subtrees in a valid set B: ∀t1, t2, ∃te, te ≡{t1, t2}. Note that for any order of the set, the cost or profit of the set is fixed: ∑ ui∈S={u1,...,u|S|} cui(Si−1) = c(S). Lemma 1 ∀X, Y ⊆ V, f(X) ≤ f(Y ) + ∑ u∈X\Y ρu(Y ), where ρu(S) = f(S ∪{u}) − f(S). The inequality can be derived from the definition of submodularity. 2 Lemma 2 For i = 1, . . . , i∗+1, when 0 ≤r ≤1, f(S∗)−f(Gi−1)≤Lr|S∗|1−r csi (Gi−1) (f(Gi−1∪{si})−f(Gi−1)), where cu(S)=c(S∪{u})−c(S). Proof. From line 5 of Algorithm 1, we have ∀u ∈S∗\Gi−1, ρu(Gi−1) cu(Gi−1)r ≤ρsi(Gi−1) csi(Gi−1)r . Let B be a valid set, and union be a function that returns the union of subtrees. We have 1030 ∀T ⊆B, ∃b ∈B, b = union(T), because we have an equivalent tree b ∈B for each union of trees T in a valid set B. That is, for any set of subtrees, we have an equivalent set of subtrees, where bi ∈Bi. Without loss of generality, we can replace the difference set S∗\Gi−1 with a set T ′ i−1 = {b0, . . . , b|T ′ i−1|} that does not contain any two elements extracted from the same valid set. 
Thus when 0 ≤r ≤1 and 0 ≤ i ≤i∗+ 1, ρs∗\Gi−1(Gi−1) cS∗\Gi−1(Gi−1)r = ρT ′ i−1(Gi−1) cT ′ i−1(Gi−1)r , and ∀bj ∈T ′ i−1, ρbj (Gi−1) cbj (Gi−1)r ≤ρsi(Gi−1) csi(Gi−1)r . Thus, ρT ′ i−1(Gi−1) = ∑ u∈T ′ i−1 ρu(Gi−1) ≤ ρsi (Gi−1) csi (Gi−1)r ∑ u∈T ′ i−1 cu(Gi−1)r ≤ ρsi (Gi−1) csi (Gi−1)r |T ′ i−1| ( ∑ u∈T ′ i−1 cu(Gi−1) |T ′ i−1| )r ≤ ρsi (Gi−1) csi (Gi−1)r |T ′ i−1|1−r (∑ u∈T ′ i−1 cu(φ) )r ≤ ρsi (Gi−1) csi (Gi−1)r |S∗|1−rLr, where the second inequality is from H¨older’s inequality. The third inequality uses the submodularity of the cost function, cu(Gi−1) = c({u} ∪Gi−1) −c(Gi−1) ≤cu(φ) and the fact that |S∗| ≥|S∗\Gi−1| ≥|T ′ i−1|, and ∑ u∈T ′ i−1 cu(φ) = c(T ′ i−1) ≤L . As a result, we have ρs∗\Gi−1(Gi−1) = ρT ′ i−1(Gi−1) ≤ ρsi(Gi−1) csi(Gi−1)r |S∗|1−rLr. Let X = S∗and Y = Gi−1. Applying Lemma 1 yields f(S∗) ≤ f(Gi−1) + ρu∈S∗\Gi−1(Gi−1). ≤ f(Gi−1) + ρsi(Gi−1) csi(Gi−1) |S∗|1−rLr. The lemma follows as a result. Lemma 3 For a normalized monotone submodular f(·), for i = 1, . . . , i∗+ 1 and 0 ≤r ≤1 and letting si be the i-th unit added into G and Gi be the set after adding si, we have f(Gi) ≥ ( 1 − i∏ k=1 ( 1 −csk(Gk−1)r Lr|S∗|1−r )) f(S∗). Proof. This is proved similarly to Lemma 3 of (Krause and Guestrin, 2005) using Lemma 2. Proof of Theorem 1. This is proved similarly to Theorem 1 of (Krause and Guestrin, 2005) using Lemma 3. References Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 481–490, Stroudsburg, PA, USA. Association for Computational Linguistics. Calinescu Calinescu, Chandra Chekuri, Martin P´al, and Jan Vondr´ak. 2011. Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing, 40(6):1740–1766. Michele Conforti and G´erard Cornu´ejols. 1984. Submodular set functions, matroids and the greedy algorithm: Tight worst-case bounds and some generalizations of the rado-edmonds theorem. Discrete Applied Mathematics, 7(3):251 – 274. Hoa Trang Dang. 2008. Overview of the tac 2008 opinion question answering and summarization tasks. In Proceedings of Text Analysis Conference. Anupam Gupta, Aaron Roth, Grant Schoenebeck, and Kunal Talwar. 2010. Constrained non-monotone submodular maximization: offline and secretary algorithms. In Proceedings of the 6th international conference on Internet and network economics, WINE’10, pages 246–257, Berlin, Heidelberg. Springer-Verlag. Sun-Yuan Hsieh and Ting-Yu Chou. 2010. The weight-constrained maximum-density subtree problem and related problems in trees. The Journal of Supercomputing, 54(3):366–380, December. Daisuke Kawahara and Sadao Kurohashi. 2006. A fully-lexicalized probabilistic model for japanese syntactic and case structure analysis. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL ’06, pages 176–183, Stroudsburg, PA, USA. Association for Computational Linguistics. Samir Khuller, Anna Moss, and Joseph S. Naor. 1999. The budgeted maximum coverage problem. Information Processing Letters, 70(1):39–45. Andreas Krause and Carlos Guestrin. 2005. A note on the budgeted maximization on submodular functions. Technical Report CMU-CALD-05-103, Carnegie Mellon University. 1031 Ariel Kulik, Hadas Shachnai, and Tami Tamir. 2009. Maximizing submodular set functions subject to multiple linear constraints. 
In Proceedings of the twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’09, pages 545–554, Philadelphia, PA, USA. Society for Industrial and Applied Mathematics. Sadao Kurohashi and Daisuke Kawahara, 2009a. Japanese Morphological Analysis System JUMAN 6.0 Users Manual. http://nlp.ist.i. kyoto-u.ac.jp/EN/index.php?JUMAN. Sadao Kurohashi and Daisuke Kawahara, 2009b. KN parser (Kurohashi-Nagao parser) 3.0 Users Manual. http://nlp.ist.i.kyoto-u.ac.jp/ EN/index.php?KNP. Jon Lee, Vahab S. Mirrokni, Viswanath Nagarajan, and Maxim Sviridenko. 2009. Non-monotone submodular maximization under matroid and knapsack constraints. In Proceedings of the 41st annual ACM symposium on Theory of computing, STOC ’09, pages 323–332, New York, NY, USA. ACM. Hui Lin and Jeff Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 912–920, Stroudsburg, PA, USA. Association for Computational Linguistics. Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 510–520, Stroudsburg, PA, USA. Association for Computational Linguistics. Jimmy Lin and Dina Demner-Fushman. 2006. Methods for automatically evaluating answers to complex questions. Information Retrieval, 9(5):565– 587, November. Andr´e F. T. Martins and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, ILP ’09, pages 1–9, Stroudsburg, PA, USA. Association for Computational Linguistics. Ryan McDonald. 2007. A study of global inference algorithms in multi-document summarization. In Proceedings of the 29th European conference on IR research, ECIR’07, pages 557–564, Berlin, Heidelberg. Springer-Verlag. Teruko Mitamura, Eric Nyberg, Hideki Shima, Tsuneaki Kato, Tatsunori Mori, Chin-Yew Lin, Ruihua Song, Chuan-Jie Lin, Tetsuya Sakai, Donghong Ji, and Noriko Kando. 2008. Overview of the NTCIR-7 ACLIA Tasks: Advanced Cross-Lingual Information Access. In Proceedings of the 7th NTCIR Workshop. Teruko Mitamura, Hideki Shima, Tetsuya Sakai, Noriko Kando, Tatsunori Mori, Koichi Takeda, Chin-Yew Lin, Ruihua Song, Chuan-Jie Lin, and Cheng-Wei Lee. 2010. Overview of the ntcir-8 aclia tasks: Advanced cross-lingual information access. In Proceedings of the 8th NTCIR Workshop. Hajime Morita, Tetsuya Sakai, and Manabu Okumura. 2011. Query snowball: a co-occurrence-based approach to multi-document summarization for question answering. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT ’11, pages 223–229, Stroudsburg, PA, USA. Association for Computational Linguistics. Wolfgang Schmidt. 1991. Greedoids and searches in directed graphs. Discrete Mathmatics, 93(1):75–88, November. Jie Tang, Limin Yao, and Dewei Chen. 2009. Multitopic based query-oriented summarization. In Proceedings of 2009 SIAM International Conference Data Mining (SDM’2009), pages 1147–1158. David M. Zajic, Bonnie J. Dorr, Jimmy Lin, and Richard Schwartz. 2006. Sentence compression as a component of a multi-document summarization system. 
In Proceedings of the 2006 Document Understanding Conference (DUC 2006) at HLT/NAACL 2006.
The effect of non-tightness on Bayesian estimation of PCFGs Shay B. Cohen Department of Computer Science Columbia University [email protected] Mark Johnson Department of Computing Macquarie University [email protected] Abstract Probabilistic context-free grammars have the unusual property of not always defining tight distributions (i.e., the sum of the “probabilities” of the trees the grammar generates can be less than one). This paper reviews how this non-tightness can arise and discusses its impact on Bayesian estimation of PCFGs. We begin by presenting the notion of “almost everywhere tight grammars” and show that linear CFGs follow it. We then propose three different ways of reinterpreting non-tight PCFGs to make them tight, show that the Bayesian estimators in Johnson et al. (2007) are correct under one of them, and provide MCMC samplers for the other two. We conclude with a discussion of the impact of tightness empirically. 1 Introduction Probabilistic Context-Free Grammars (PCFGs) play a special role in computational linguistics because they are perhaps the simplest probabilistic models of hierarchical structures. Their simplicity enables us to mathematically analyze their properties to a detail that would be difficult with linguistically more accurate models. Such analysis is useful because it is reasonable to expect more complex models to exhibit similar properties as well. The problem of inferring PCFG rule probabilities from training data consisting of yields or strings alone is interesting from both cognitive and engineering perspectives. Cognitively it is implausible that children can perceive the parse trees of the language they are learning, but it is more reasonable to assume that they can obtain the terminal strings or yield of these trees. Unsupervised methods for learning a grammar from terminal strings alone is also interesting from an engineering perspective because such training data is cheap and plentiful, while the manually parsed data required by supervised methods are expensive to produce and relatively rare. Cohen and Smith (2012) show that inferring PCFG rule probabilities from strings alone is computationally intractable, so we should not expect to find an efficient, general-purpose algorithm for the unsupervised problem. Instead, approximation algorithms are standardly used. For example, the InsideOutside (IO) algorithm efficiently implements the Expectation-Maximization (EM) procedure for approximating a Maximum Likelihood estimator (Lari and Young, 1990). Bayesian estimators for PCFG rule probabilities have also been attracting attention because they provide a theoretically-principled way of incorporating prior information. Kurihara and Sato (2006) proposed a Variational Bayes estimator based on a mean-field approximation, and Johnson et al. (2007) proposed MCMC samplers for the posterior distribution over rule probabilities and the parse trees of the training data strings. PCFGs have the interesting property (which we expect most linguistically more realistic models to also possess) that the distributions they define are not always properly normalized or “tight”. In a non-tight PCFG the partition function (i.e., sum of the “probabilities” of all the trees generated by the PCFG) is less than one. (Booth and Thompson, 1973, called such non-tight PCFGs “inconsistent”, but we follow Chi and Geman (1998) in calling them “non-tight” to avoid confusion with the consistency of statistical estimators). 
Chi (1999) showed that renormalized nontight PCFGs (which he called “Gibbs CFGs”) define the same class of distributions over trees as do tight PCFGs with the same rules, and provided an algorithm for mapping any PCFG to a tight PCFG with the same rules that defines the same distribution over trees. An obvious question is then: how does tightness affect the inference of PCFGs? Chi and Geman (1998) studied the question for Maximum Likelihood (ML) estimation, and showed that ML estimates are always tight for both the supervised case (where the input consists of parse trees) and the unsupervised case (where the input consists of yields or terminal strings). This means that ML estimators can simply ignore issues of tightness, and rest assured that the PCFGs they estimate are in fact tight. The situation is more subtle with Bayesian estimators. We show that for the special case of linear PCFGs (which include HMMs) with non-degenerate priors the posterior puts zero mass on non-tight PCFGs, so tightness is not an issue with Bayesian estimation of such grammars. However, because all of the commonly used priors (such as the Dirichlet or the logistic normal) assign non-zero probability across the whole probability simplex, in general the posterior may assign non-zero probability to nontight PCFGs. We discuss three different possible approaches to this in this paper: 1. the only-tight approach, where we modify the prior so it only assigns non-zero probability to tight PCFGs, 2. the renormalization approach, where we renormalize non-tight PCFGs so they define a probability distribution over trees, and 3. the sink-element approach, where we reinterpret non-tight PCFGs as assigning non-zero probability to a “sink element”, so both tight and non-tight PCFGs are properly normalized. We show how to modify the Gibbs sampler described by Johnson et al. (2007) so it produces samples from the posterior distributions defined by the only-tight and renormalization approaches. Perhaps surprisingly, we show that Gibbs sampler as defined by Johnson et al. actually produces samples from the posterior distributions defined by the sink-element approach. We conclude by studying the effect of requiring tightness on the estimation of some simple PCFGs. Because the Bayesian posterior converges around the (tight) ML estimate as the size of the data grows, requiring tightness only seems to make a difference with highly biased priors or with very small training corpora. 2 PCFGs and tightness Let G = (T, N, S, R) be a Context-Free Grammar in Chomsky normal form with no useless productions, where T is a finite set of terminal symbols, N is a finite set of nonterminal symbols (disjoint from T), S ∈N is a distinguished nonterminal called the start symbol, and R is a finite set of productions of the form A →B C or A →w, where A, B, C ∈N and w ∈T. In what follows we use β as a variable ranging over (N × N) ∪T. A Probabilistic Context-Free Grammar (G, Θ) is a pair consisting of a context-free grammar G and a real-valued vector Θ of length |R| indexed by productions, where θA→β is the production probability associated with the production A →β ∈R. We require that θA→β ≥0 and that for all nonterminals A ∈N, P A→β∈RA θA→β = 1, where RA is the subset of rules R expanding the nonterminal A. A PCFG (G, Θ) defines a measure µΘ over trees t as follows: µΘ(t) = Y r∈R θfr(t) r where fr(t) is the number of times the production r = A →β ∈R is used in the derivation of t. 
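For a single derivation this measure is just a product over rule counts, as in the sketch below; the encoding of rules as tuples and the two-rule example grammar are illustrative assumptions.

```python
from math import prod

def tree_measure(theta, rule_counts):
    """mu_Theta(t) = prod_r theta_r ** f_r(t) for a single derivation t."""
    return prod(theta[r] ** f for r, f in rule_counts.items())

# the two-rule grammar S -> S S | a used as a running example below
theta = {("S", ("S", "S")): 0.4, ("S", ("a",)): 0.6}
print(tree_measure(theta, {("S", ("S", "S")): 1, ("S", ("a",)): 2}))  # 0.4 * 0.6**2
```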
The partition function Z or measure of all possible trees is: Z(Θ) = X t′∈T Y r∈R θfr(t′) r where T is the set of all (finite) trees generated by G. A PCFG is tight iff the partition function Z(Θ) = 1. In this paper we use Θ⊥to denote the set of rule probability vectors Θ for which G is nontight. Nederhof and Satta (2008) survey several algorithms for computing Z(Θ), and hence for determining whether a PCFG is tight.1 Non-tightness can arise in very simple PCFGs, such as the “Catalan” PCFG S →S S | a. This grammar produces binary trees where all internal 1We found out that finding whether a PCFG is tight by directly inspecting the partition function value is less stable than using the method in Wetherell (1980). For this reason, we used Wetherell’s approach, which is based on finding the principal eigenvalue of the matrix M. nodes are labeled as S and the yield of these trees is a sequence of as. If the probability of the rule S →S S is greater than 0.5 then this PCFG is nontight. Perhaps the most straight-forward way to understand this non-tightness is to view this grammar as defining a branching process where an S can either “reproduce” with probability θS→S S or “die out” with probability θS→a. When θS→S S > θS→a the S nodes reproduce at a faster rate than they die out, so the derivation has a non-zero probability of endlessly rewriting (Atherya and Ney, 1972). 3 Bayesian inference for PCFGs The goal of Bayesian inference for PCFGs is to infer a posterior distribution over the rule probability vectors Θ given observed data D. This posterior distribution is obtained by combining the likelihood P(D | Θ) with a prior distribution P(Θ) over Θ using Bayes Rule. P(Θ | D) ∝P(D | Θ) P(Θ) We now formally define the three approaches to handling non-tightness mentioned earlier: the only-tight approach: we only permit priors where P(Θ⊥) = 0, i.e., we insist that the prior assign zero mass to non-tight rule probability vectors, so Z = 1. This means we can define: P(t | Θ) = µΘ(t) the renormalization approach: we renormalize non-tight PCFGs by dividing by the partition function: P(t | Θ) = 1 Z(Θ) µΘ(t) (1) the sink-element approach: we redefine our probability distribution so its domain is a set T ′ = T ∪{⊥}, where T is the set of (finite) trees generated by G and ⊥̸∈T is a new element that serves as a “sink state” to which the “missing mass” 1 −Z(Θ) is assigned. Then we define:2 P(t | Θ) =  µΘ(t) if t ∈T 1 −Z(Θ) if t = ⊥ With this in hand, we can now define the likelihood term. We consider two types of data D here. In the supervised setting the data D consists of a corpus of parse trees D = (t1, . . . , tn) where each tree ti is generated by the PCFG G, so P(D | Θ) = n Y i=1 P(ti | Θ) In the unsupervised setting the data D consists of a corpus of strings D = (w1, . . . , wn) where each string wi is the yield of one or more trees generated by G. In this setting P(D | Θ) = n Y i=1 P(wi | Θ), where: P(w | Θ) = X t∈T :yield(t)=w P(t | Θ) 4 The special case of linear PCFGs One way to handle the issue of tightness is to identify a family of CFGs for which practically any parameter setting will yield a tight PCFG. This is the focus of this section, in which we identify a subset of CFGs, which are “almost everywhere” tight. This family of CFGs includes many of the CFGs used in NLP applications. We cannot expect that a CFG will yield a tight PCFG for any assignment to the rule probabilities (i.e. that Θ⊥= ∅). 
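For the Catalan grammar above, both diagnostics mentioned in this section can be sketched in a few lines: fixed-point iteration for the partition function (Nederhof and Satta, 2008) and the principal-eigenvalue test of Wetherell (1980), which for this grammar reduces to checking the single entry 2θ of the matrix M, the expected number of S children per S.

```python
def catalan_partition(theta, iters=10000):
    """Fixed-point iteration Z <- theta*Z**2 + (1 - theta) for S -> S S | a.
    Starting from 0, this converges to the smallest non-negative solution."""
    Z = 0.0
    for _ in range(iters):
        Z = theta * Z * Z + (1.0 - theta)
    return Z

def catalan_is_tight(theta):
    """Wetherell-style check: the mean matrix M is 1x1 with entry 2*theta,
    so the PCFG is tight iff 2*theta < 1 (the borderline 2*theta == 1 is also
    tight for this particular grammar)."""
    return 2.0 * theta <= 1.0

print(catalan_partition(0.4), catalan_is_tight(0.4))   # ~1.0, True
print(catalan_partition(0.6), catalan_is_tight(0.6))   # ~0.667 (< 1), False
```

Whether such a grammar is tight therefore depends entirely on the particular assignment of rule probabilities.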
Even in simple cases, such as the grammar S →S|a, the assignment of probability 1 to S →S and 0 to the other rule renders the S nonterminal useless, and places all of the probability 2This definition of a distribution over trees can be induced by a tight PCFG with a special ⊥symbol in its vocabulary. Given G, the first step is to create a tight grammar G0 using the renormalization approach. Then, a new start symbol is added to G0, S0, and also rules S0 →S (where S is the old start symbol in G0) and S0 →⊥. The first rule is given probability Z(Θ) and the second rule is given probability 1 −Z(Θ). It can be then readily shown that the new tight PCFG G0 induces a distribution over trees just like in Eq. 3, only with additional S0 on top of all trees. mass on infinite structures of the form S →S → S →. . .. However, we can weaken our requirement so that the cases in which parameter assignment yields a non-tight PCFG are rare, or have measure zero. To put it more formally, we say that a prior P(Θ) is “tight almost everywhere for G” if P(Θ⊥) = Z Θ∈Θ⊥P(Θ) dΘ = 0. We now provide a sufficient condition (linearity) for CFGs under which they are tight almost everywhere with any continuous prior. For a nonterminal A ∈N and β ∈(N ∪T)∗, we use A ⇒k β to denote that A can be re-written using a sequence of rules from R to the sentential form β in k derivation steps. We use A ⇒+ β to denote that there exists a k > 0 such that A ⇒k β. Definition 1 A context-free grammar G is linear if there are no A ∈N such that3 A ⇒+ . . . A . . . A . . . . Let L(A) = {w|A ⇒∗w, w ∈T ∗}. Define G(A) to be the grammar G where S is replaced by A. We assume G has no useless nonterminals, i.e. each nonterminal A participates in some complete tree derivation (but it could potentially have probability 0). Useless nonterminals can always be removed from a grammar without changing the language generated by the grammar. Definition 2 A nonterminal A ∈N in a probabilistic context-free grammar G with parameters Θ is nonterminating if: • A is recursive: there is a β such that A ⇒+ β and A appears in β. • PG(A)(L(A)) = P w∈L(A) PG(A)(w) = 0. Lemma 1 A linear PCFG G with parameters Θ which does not have any nonterminating nonterminals is tight. 3Note that this definition of linear CFGs deviates from the traditional definition, which states that a PCFG is linear if the right handside of each rule includes at most one nonterminal. The traditional definition implies Definition 1. Proof: Our proof relies on the properties of a certain |N| × |N| matrix M where: MAB = X A→β∈RA n(β, B) θA→β where n(β, B) is the number of appearances of the nonterminal B in the sequence β. MAB is the expected number of B nonterminals generated from an A nonterminal in one single derivational step, so [Mk]AB is the expected number of B nonterminals generated from an A nonterminal in a k-step derivation (Wetherell, 1980). Since M is a non-negative matrix, under some regularity conditions, the Frobenius-Perron theorem states that the largest eigenvalue of this matrix (in absolute value) is a real number. Let this eigenvalue be denoted by λ. A PCFG is called “subcritical” if λ < 1 and supercritical if λ > 1. Then, in turn, a PCFG is tight if it is subcritical. It is not tight if it is supercritical. The case of λ = 1 is a borderline case that does not give sufficient information to know whether the PCFG is tight or not. In the Bayesian case, for a continuous prior such as the Dirichlet prior, this borderline case will have measure zero under the prior. Now let A ∈N. 
Since the grammar is linear, there is no derivation A ⇒+ . . . A . . . A . . .. Therefore, any derivation of the form A ⇒+ . . . A . . . includes A on the right hand-side exactly once. Because the grammar has no nonterminating nonterminals, the probability of such a derivation is strictly smaller than 1. For each A ∈N, define: pA = X β=...A... P(A ⇒|N| β|Θ). Since A is not useless, then pA < 1. Therefore q = maxA pA < 1. Since any derivation of length k of the form A ⇒. . . A . . . can be decomposed to at least k 2|N| cycles that start at a terminal B ∈N and end in the same nonterminal B ∈N, it holds that: [Mk]AA ≤q k 2|N| k→∞ →0. This means that trace(Mk) k→∞ → 0. This means that the eigenvalue of M is strictly smaller than 1 (linear algebra), and therefore the PCFG is tight. ■ Proposition 1 Any continuous prior P(Θ) on a linear grammar G is tight almost everywhere for G. Proof: Let G be a linear grammar. With a continuous prior, the probability of G getting parameters from the prior which yield a useless non-terminal is 0 – it would require setting at least one rule in the grammar with rule probability which is exactly 1. Therefore, with probability 1, the parameters taken from the prior yield a PCFG which is linear and does not have nonterminating nonterminals. According to Lemma 1, this means the PCFG is tight. ■ Deciding whether a grammar G is linear can be done in polynomial time using the construction from Bar-Hillel et al. (1964). We can first eliminate the differences between nonterminals and terminal symbols by adding a rule A →cA for each nonterminal A ∈N, after extending the set of terminal symbols A with {cA|A ∈N}. Let GA be the grammar G with the start symbol being replaced with A. We can then intersect the grammar GA with the regular language T ∗cAT ∗cAT ∗(for each nonterminal A ∈N). If for any nonterminal A the intersection is not the empty set (with respect to the language that the intersection generates), then the grammar is not linear. Checking whether the intersection is the empty set or not can be done in polynomial time. We conclude this section by remarking that many of the models used in computational linguistics are in fact equivalent to linear PCFGs, so continuous Bayesian priors are almost everywhere tight. For example, HMMs and many kinds of “stacked” finitestate machines are equivalent to linear PCFGs, as are the example PCFGs given in Johnson et al. (2007) to motivate the MCMC estimation procedures. 5 Dirichlet priors The first step in Bayesian inference is to specify a prior on Θ. In the rest of this paper we take P(Θ) to be a product of Dirichlet distributions, with one distribution for each non-terminal A ∈N, as this turns out to simplify the computations considerably. The prior is parameterized by a positive real valued vector α indexed by productions R, so each production probability θA→β has a corresponding Dirichlet parameter αA→β. As before, let RA be the set of productions in R with left-hand side A, and let θA and αA refer to the component subvectors of θ and α respectively indexed by productions in RA. The Dirichlet prior P(Θ | α) is: P(Θ | α) = Y A∈N PD(ΘA | αA), where PD(ΘA | αA) = 1 C(αA) Y r∈RA θαr−1 r and C(αA) = Q r∈RA Γ(αr) Γ(P r∈RA αr) where Γ is the generalized factorial function and C(α) is a normalization constant that does not depend on ΘA. Dirichlet priors are useful because they are conjugate to the multinomial distribution, which is the building block of PCFGs. 
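Drawing a rule-probability vector from this prior amounts to one Dirichlet draw per nonterminal, as in the sketch below, where the encoding of rules grouped by left-hand side is an assumption of the example.

```python
import numpy as np

def sample_rule_probs(rules_by_lhs, alpha, rng=None):
    """Draw Theta from a product of Dirichlets, one block per nonterminal A.

    rules_by_lhs: dict mapping a nonterminal A to the list of rules in R_A.
    alpha: dict mapping each rule to its Dirichlet hyperparameter alpha_r.
    """
    rng = rng or np.random.default_rng(0)
    theta = {}
    for A, rules in rules_by_lhs.items():
        probs = rng.dirichlet([alpha[r] for r in rules])
        theta.update(zip(rules, probs))
    return theta

# e.g. a symmetric prior over the Catalan grammar S -> S S | a
rules_by_lhs = {"S": [("S", ("S", "S")), ("S", ("a",))]}
alpha = {r: 1.0 for r in rules_by_lhs["S"]}
print(sample_rule_probs(rules_by_lhs, alpha))
```

Each block is conjugate to the multinomial choice among the rules in RA.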
Ignoring issues of tightness for the moment and setting P(t | Θ) = µΘ(t), this means that in the supervised setting the posterior distribution P(Θ | t, α) given a set of parse trees t = (t1, . . . , tn) is also a product of Dirichlets distribution. P(Θ | t, α) ∝P(t | Θ) P(Θ | α) ∝ Y r∈R θfr(t) r ! Y r∈R θαr−1 r ! = Y r∈R θfr(t)+αr−1 r which is a product of Dirichlet distributions with parameters f(t) + α, where f(t) is the vector of rule counts in t indexed by r ∈R. We can thus write: P(Θ | t, α) = P(Θ | f(t) + α) which makes it clear that the rule counts are directly added to the parameters of the prior to produce the parameters of the posterior. 6 Inference in the supervised setting We first discuss Bayesian inference in the supervised setting, as inference in the unsupervised setting is based on inference for the supervised setting. For each of the three approaches to non-tightness we provide an algorithm that characterizes the posterior P(Θ | t), where t = (t1, . . . , tn) is a sequence of trees, by generating samples from that posterior. Our MCMC algorithms for the unsupervised setting build on these samplers for the supervised setting. Input: Grammar G, vector of trees t, vector of hyperparameters α, previous parameters Θ0. Result: A vector of parameters Θ repeat draw θ from products of Dirichlet with hyperparameters α + f(t) until Θ is tight for G; return Θ Algorithm 1: An algorithm for generating samples from P(Θ | t, α) for the only-tight approach. Input: Grammar G, vector of trees t, vector of hyperparameters α, previous rule parameters Θ0. Result: A vector of parameters Θ draw a proposal Θ∗from a product of Dirichlets with parameters α + f(t). draw a uniform number u from [0, 1]. if u < min{1, Z(Θ(i−1))/Z(Θ∗) n} return Θ∗. return Θ0. Algorithm 2: One step of Metropolis-Hastings algorithm for generating samples from P(Θ | t, α) for the renormalization approach. 6.1 The only-tight approach The “only-tight” approach requires that the prior assign zero mass to non-tight rule probability vectors Θ⊥. One way to define such a distribution is to restrict the domain of an existing prior distribution with the set of tight Θ and renormalize. In more detail, if P(Θ) is a prior over rule probabilities, then its renormalization is the prior P′ defined as: P′(Θ) = P(Θ)I(Θ /∈Θ⊥) Z(Θ⊥) . (2) where Z(Θ⊥) = R Θ P(Θ)I(Θ /∈Θ⊥)dΘ. Perhaps surprisingly, it turns out that if P(Θ) belongs to a family of conjugate priors, then P′(Θ) also belongs to a (different) family of conjugate priors as well. Proposition 2 Let P(Θ|α) be a prior with hyperparameters α over the parameters of G such that P is conjugate to the grammar likelihood. Then P′, defined in Eq. 2, is conjugate to the grammar likelihood as well. Proof: Assume that trees t are observed, and the Input: Grammar G, vector of trees t, vector of hyperparameters α, previous parameters Θ0. Result: A vector of parameters Θ draw Θ from products of Dirichlet with hyperparameters α + f(t) return Θ Algorithm 3: An algorithm for generating samples from P(Θ | t, α) for the sink-state approach. prior over the grammar parameters is the prior defined in Eq. 2. Therefore, the posterior is: P(Θ|t, α) ∝P′(Θ|α)p(t|Θ) = P(Θ|α)p(t|Θ)I(Θ /∈Θ⊥) Z(Θ⊥) ∝P(Θ|t, α)I(Θ /∈Θ⊥) Z(Θ⊥) . Since P(Θ|α) is a conjugate prior to the PCFG likelihood, then there exists α′ = α′(t) such that P(Θ|t, α) = P′(Θ|α′). Therefore: P(Θ|t, α) ∝P(Θ|α′)I(Θ /∈Θ⊥) Z(Θ⊥) . which exactly equals P′(Θ|α′). 
■ Sampling from the posterior over the parameters given a set of trees t is therefore quite simple when assuming the base prior being renormalized is a product of Dirichlets. Algorithm 1 samples from a product of Dirichlets distribution with hyperparameters α + f(t) repeatedly, each time checking and rejecting the sample until we obtain a tight PCFG. The more mass the Dirichlet distribution with hyperparameters α + f(t) puts on non-tight PCFGs, the more rejections will happen. In general, if the probability mass on non-tight PCFGs is q⊥, then it would require, on average 1/(1 −q⊥) samples from this distribution in order to obtain a tight PCFG. 6.2 The renormalization approach The renormalization approach modifies the likelihood function instead of the prior. Here we use a product of Dirichlets prior P(Θ | α) on rule probability vectors Θ, but the presence of the partition function Z(Θ) in Eq. 1 means that the likelihood is no longer conjugate to the prior. Instead we have: P(Θ | t) = n Y i=1 µΘ(ti) Z(Θ) P(Θ | α) ∝ 1 Z(Θ)n P(Θ | α + f(t)). (3) Note that the factor Z(Θ) depends on Θ, and therefore cannot be absorbed into the constant. Algorithm 2 describes a Metropolis-Hastings sampler for sampling from the posterior in Eq. 3 that uses a product of Dirichlets with parameters α + f(t) as a proposal distribution. In our experiments, we use the algorithm from Nederhof and Satta (2008) to compute the partition function which is needed in Algorithm 2. 6.3 The “sink element” approach The “sink element” approach does not affect the likelihood (since the probability of a tree t is just the product of the probabilities of the rules used to generate it), nor does it require a change to the prior. (The sink element ⊥is not a member of the set of trees T , so it cannot appear in the data t). This means that the conjugacy argument given at the bottom of section 5 holds in this approach, so the posterior P(Θ | t, α) is a product of Dirichlets with parameters f(t) + α. Algorithm 3 gives a sampler for P(Θ | t, α) for the sink element approach. 7 Inference in the unsupervised setting Johnson et al. (2007) provide two Markov chain Monte Carlo algorithms for Bayesian inference for PCFG rule probabilities in the unsupervised setting (i.e., where the data consists of a corpus of strings w = (w1, . . . , wn) alone). The algorithms we give here are based on their Gibbs sampler, which in each iteration first samples parse trees t = (t1, . . . , tn), where each ti is a parse for wi, from P(t | w, Θ), and then samples Θ from P(Θ | t, α). Notice that the conditional distribution P(t | w, Θ) is unaffected in each of our three approaches (the partition functions cancel in the renormalization approach), so the algorithm for sampling from P(t | w, Θ) given by Johnson et al. applies in each of our three approaches as well. Johnson et al. ignored tightness and assumed that P(Θ | t, α) is a product of Dirichlets with parameInput: Grammar G, vector of hyperparameters α, vector of strings w = (w1, . . . , wn), previous rule parameters Θ0. Result: A vector of parameters Θ for i ←1 to n do draw ti from P(ti|wi, Θ0) end use Algorithm 2 to sample Θ given G, t, α and Θ0 return Θ Algorithm 4: One step of the Metropolis-withinGibbs sampler for the renormalization approach. ters f(t) + α. As we noted in section 6.3, this assumption holds for the sink-state approach to nontightness, so their sampler is in fact correct for the sink-state approach. 
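One sweep of this blocked sampler, together with the Metropolis-Hastings update used for the renormalization approach, can be sketched as follows; `sample_parse`, the conjugate proposal, and the partition-function routine (e.g., Nederhof and Satta, 2008) are assumed given.

```python
import numpy as np

def gibbs_sweep(theta, sentences, sample_parse, sample_theta):
    """One sweep of the blocked Gibbs sampler: resample every parse given the
    current rule probabilities, then resample the rule probabilities.

    sample_parse(w, theta) draws a tree from P(t | w, Theta); this step is the
    same under all three approaches.  sample_theta(trees, theta) draws from
    P(Theta | t, alpha) using whichever Section 6 sampler is plugged in.
    """
    trees = [sample_parse(w, theta) for w in sentences]
    return sample_theta(trees, theta), trees

def mh_theta_step(theta_old, trees, draw_conjugate_proposal, partition_function,
                  rng=None):
    """The renormalization-approach update (Algorithm 2): propose from the
    conjugate product of Dirichlets with parameters alpha + f(t); the prior and
    rule-count terms cancel against the proposal, leaving the acceptance ratio
    (Z(theta_old) / Z(theta_new)) ** n."""
    rng = rng or np.random.default_rng(0)
    theta_new = draw_conjugate_proposal(trees)
    ratio = (partition_function(theta_old) / partition_function(theta_new)) ** len(trees)
    return theta_new if rng.uniform() < min(1.0, ratio) else theta_old
```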
In fact, we obtain samplers for the unsupervised setting for each of our approaches by “plugging in” the corresponding sampling algorithm (Eq. 1–3) for P(Θ | t, α) into the generic Gibbs sampler framework of Johnson et al. The one complication is that because we use a Metropolis-Hastings procedure to generate samples from P(Θ | t, α) in the renormalization approach, we use the Metropolis-within-Gibbs procedure given in Algorithm 4 (Robert and Casella, 2004). 8 The expressive power of the three approaches Probably the most important question to ask with respect to the three different approaches to nontightness is whether they differ in terms of expressive power. Clearly the three approaches differ in terms of the grammars they admit (the only-tight approach requires the prior to only assign non-zero probability to tight PCFGs, while the other two approaches permit the prior to assign non-zero probability to non-tight PCFGs as well). However, if we regard a grammar as merely a device for defining a distribution over trees and a prior as defining a distribution over distributions over trees, it is reasonable to ask whether the class of distributions over distributions of trees that each of these approaches define are the same or differ. We believe, but have not proved, that all three approaches define the same class of distributions over distributions of trees in the following sense: any prior used with one of the approaches can be transformed into a different prior that can be used with one of the other approaches, and yield identical posterior over trees conditioned on a string, marginalizing out the parameters. This does not mean that the three approaches are equivalent, however. In this section we provide a grammar such that with a uniform prior over rule probabilities, the conditional distribution over trees given a fixed string varies under each of the three different approaches. The grammar we consider has three rules S → S S S|S S|a with probabilities θ1, θ2 and 1 −θ1 − θ2, respectively. The Θ parameters are required to satisfy θ1 + θ2 ≤1 and θi ≥0 for i = 1, 2. We compute the posterior distribution over parse trees for the string w = a a a. The grammar generates three parse trees for w1, namely: t1 = S S a S a S a t2 = S S a S S a S a t3 = S S S a S a S a The partition function Z for this grammar is the smallest positive root of the cubic equation: Z = θ1Z3 + θ2Z2 + (1 −θ1 −θ2) We used Mathematica to find an analytic solution for Z in this equation, obtaining not only an expression for the partition function Z(Θ) but also identifying the non-tight region Θ⊥. In order to compute P(t1|w), we used Mathematica to first compute the following quantities: qsinkElement(ti) = Z Θ µΘ(ti) dΘ qtightOnly(ti) = Z Θ µΘ(ti) I(Θ /∈Θ⊥) dΘ qtightOnly(ti) = Z Θ µΘ(ti)/Z(Θ) dΘ where i ∈{1, 2, 3}. We used Mathematica to analytically compute q(ti) for each approach and each i ∈{1, 2, 3}. Then it’s easy to show that: 0 10 20 30 0.35 0.40 0.45 0.50 0.55 Average f−score Density Inference only−tight sink−state renormalise Figure 1: The density of the F1-scores with the three approaches. The prior used is a symmetric Dirichlet with α = 0.1. P(ti | w) = q(ti) P3 i′=1 q(ti′) where the q used is based on the approach to tightness desired. For the sink-element approach, P(t1|w) = 7 11 ≈ 0.636364. For the onlytight approach P(t1|w) = 11179 17221 ≈ 0.649149. For the renormalization approach the analytic expression is too complex to include in this paper, but it approximately equals 0.619893. 
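For this three-rule grammar the smallest positive root can also be found numerically: iterating the cubic map from zero converges to the smallest non-negative fixed point, and the grammar is tight exactly when that fixed point is 1. The sketch below illustrates this naive fixed-point computation; the iteration count, tolerance, and example parameter values are arbitrary.

```python
# Naive fixed-point computation of the partition function for S -> S S S | S S | a,
# whose Z is the smallest positive root of Z = theta1*Z^3 + theta2*Z^2 + (1 - theta1 - theta2).
def partition_function(theta1, theta2, iters=10000):
    z = 0.0
    for _ in range(iters):                     # starting from 0 converges to the smallest root
        z = theta1 * z ** 3 + theta2 * z ** 2 + (1.0 - theta1 - theta2)
    return z

def is_tight(theta1, theta2, tol=1e-6):
    """A PCFG is tight exactly when its partition function equals 1."""
    return abs(partition_function(theta1, theta2) - 1.0) < tol

print(partition_function(0.1, 0.2), is_tight(0.1, 0.2))   # subcritical: Z is (numerically) 1
print(partition_function(0.6, 0.3), is_tight(0.6, 0.3))   # non-tight: Z < 1, mass lost to infinite trees
```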
A log of our Mathematica calculations is available at http://www.cs.columbia.edu/˜scohen/ acl13tightness-mathematica.pdf, and we confirmed these results to three decimal places using the samplers described above (which required 107 samples per approach). While the differences between these conditional probabilites are not great, the conditional probabilities are clearly different, so the three approaches do in fact define different distributions over trees under a uniform prior on rule probabilities. 9 Empirical effects of the three approaches in unsupervised grammar induction In this section we present experiments using the three samplers just described in an unsupervised grammar induction problem. Our goal here is not to improve the state-of-the-art in unsupervised grammar induction, but to try to measure empirical differences in the estimates produced by the three different approaches to tightness just described. The bottom line of our experiments is that we could not detect any significant difference in the estimates produced by samplers for these three different approaches. In our experiments we used the English Penn treebank (Marcus et al., 1993). We use the part-ofspeech tag sequences of sentences shorter than 11 words in sections 2–21. The grammar we use is the PCFG version of the dependency model with valence (Klein and Manning, 2004), as it appears in Smith (2006). We used a symmetric Dirichlet prior with hyperparameter α = 0.1. For each of the three approaches for handling tightness, we ran 100 times the samplers in §7, each for 1,000 iterations. We discarded the first 900 sweeps of each run, and calculated the F1-scores of the sampled trees every 10th sweep from the last 100 sweeps. For each run we calculated the average F1-score over the 10 sweeps we evaluated. We thus have 100 average F1-scores for each of the samplers. Figure 1 plots the density of F1 scores (compared to the gold standard) resulting from the Gibbs sampler, using all three approaches. The mean value for each of the approaches is 0.41 with standard deviation 0.06 (only-tight), 0.41 with standard deviation 0.05 (renormalization) and 0.42 with standard deviation 0.06 (sink element). In addition, the only-tight approach results in an average of 437 (s.d., 142) rejected proposals in 1,000 samples, while the renormalization approach results in an average of 232 (s.d., 114) rejected proposals in 1,000 samples. (It’s not surprising that the only-tight approach results in more rejections as it keeps proposing new Θ until a tight proposal is found, while the renormalization approach simply uses the old Θ). We performed two-sample Kolmogorov-Smirnov tests (which are non-parametric tests designed to determine if two distributions are different; see DeGroot, 1991) on each of the three pairs of 100 F1scores. None of the tests were close to significant; the p-values were all above 0.5. Thus our experiments provided no evidence that the samplers produced different distributions over trees, although it’s reasonable to expect that these distributions do indeed differ. In terms of running time, our implementation of the renormalization approach was several times slower than our implementations of the other two approaches because we used the naive fixed-point algorithm to compute the partition function: perhaps this could be improved using one of the more sophisticated partition function algorithms described in Nederhof and Satta (2008). 
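The pairwise comparison reported above can be reproduced with a standard two-sample Kolmogorov-Smirnov implementation such as scipy.stats.ks_2samp. In the sketch below the three score arrays are random placeholders standing in for the 100 per-run average F1 scores of each approach, not our actual results.

```python
# Pairwise two-sample Kolmogorov-Smirnov tests over per-run F1 scores.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
f1 = {
    "only-tight":      rng.normal(0.41, 0.06, size=100),   # placeholder scores
    "renormalization": rng.normal(0.41, 0.05, size=100),
    "sink-element":    rng.normal(0.42, 0.06, size=100),
}

names = list(f1)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        stat, p = ks_2samp(f1[names[i]], f1[names[j]])
        print(f"{names[i]} vs {names[j]}: D={stat:.3f}, p={p:.3f}")
```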
10 Conclusion In this paper we characterized the notion of an almost everywhere tight grammar in the Bayesian setting and showed it holds for linear CFGs. For non-linear CFGs, we described three different approaches to handle non-tightness. The “only-tight” approach restricts attention to tight PCFGs, and perhaps surprisingly, we showed that conjugacy still obtains when the domain of a product of Dirichlets prior is restricted to the subset of tight grammars. The renormalization approach involves renormalizing the PCFG measure µ over trees when the grammar is non-tight, which destroys conjugacy with a product of Dirichlets prior. Perhaps most surprisingly of all, the sink-element approach, which assigns the missing mass in non-tight PCFG to a sink element ⊥, turns out to be equivalent to existing practice where tightness is ignored. We studied the posterior distributions over trees induced by the three approaches under a uniform prior for a simple grammar and showed that they differ. We leave for future work the important question of whether the classes of distributions over distributions over trees that the three approaches define are the same or different. We described samplers for the supervised and unsupervised settings for each of these approaches, and applied them to an unsupervised grammar induction problem. (The code for the unsupervised samplers is available from http://web.science.mq.edu. au/˜mjohnson). We could not detect any difference in the posterior distributions over trees produced by these samplers, despite devoting considerable computational resources to the problem. This suggests that for these kinds of problems at least, tightness is not of practical concern for Bayesian inference of PCFGs. Acknowledgements We thank the anonymous reviewers and Giorgio Satta for their valuable comments. Shay Cohen was supported by the National Science Foundation under Grant #1136996 to the Computing Research Association for the CIFellows Project, and Mark Johnson was supported by the Australian Research Council’s Discovery Projects funding scheme (project numbers DP110102506 and DP110102593). References K. B. Atherya and P. E. Ney. 1972. Branching Processes. Dover Publications. Y. Bar-Hillel, M. Perles, and E. Shamir. 1964. On formal properties of simple phrase structure grammars. Language and Information: Selected Essays on Their Theory and Application, pages 116–150. T. L. Booth and R. A. Thompson. 1973. Applying probability measures to abstract languages. IEEE Transactions on Computers, C-22:442–450. Z. Chi and S. Geman. 1998. Estimation of probabilistic context-free grammars. Computational Linguistics, 24(2):299–305. Z. Chi. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25(1):131–160. S. B. Cohen and N. A. Smith. 2012. Empirical risk minimization for probabilistic grammars: Sample complexity and hardness of learning. Computational Linguistics, 38(3):479–526. M. H. DeGroot. 1991. Probability and Statistics (3rd edition). Addison-Wesley. M. Johnson, T. L. Griffiths, and S. Goldwater. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Proceedings of NAACL. D. Klein and C. D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of ACL. K. Kurihara and T. Sato. 2006. Variational Bayesian grammar induction for natural language. In 8th International Colloquium on Grammatical Inference. K. Lari and S.J. Young. 1990. 
The estimation of Stochastic Context-Free Grammars using the Inside-Outside algorithm. Computer Speech and Language, 4(35-56). M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, 19:313–330. M.-J. Nederhof and G. Satta. 2008. Computing partition functions of PCFGs. Research on Language and Computation, 6(2):139–162. C. P. Robert and G. Casella. 2004. Monte Carlo Statistical Methods. Springer-Verlag New York. N. A. Smith. 2006. Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text. Ph.D. thesis, Johns Hopkins University. C. S. Wetherell. 1980. Probabilistic languages: A review and some open questions. Computing Surveys, 12:361–379.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1042–1051, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Integrating Multiple Dependency Corpora for Inducing Wide-coverage Japanese CCG Resources Sumire Uematsu† [email protected] Takuya Matsuzaki‡ [email protected] Hiroki Hanaoka† [email protected] Yusuke Miyao‡ [email protected] Hideki Mima† [email protected] †The University of Tokyo Hongo 7-3-1, Bunkyo, Tokyo, Japan ‡National Institute of Infomatics Hitotsubashi 2-1-2, Chiyoda, Tokyo, Japan Abstract This paper describes a method of inducing wide-coverage CCG resources for Japanese. While deep parsers with corpusinduced grammars have been emerging for some languages, those for Japanese have not been widely studied, mainly because most Japanese syntactic resources are dependency-based. Our method first integrates multiple dependency-based corpora into phrase structure trees and then converts the trees into CCG derivations. The method is empirically evaluated in terms of the coverage of the obtained lexicon and the accuracy of parsing. 1 Introduction Syntactic parsing for Japanese has been dominated by a dependency-based pipeline in which chunk-based dependency parsing is applied and then semantic role labeling is performed on the dependencies (Sasano and Kurohashi, 2011; Kawahara and Kurohashi, 2011; Kudo and Matsumoto, 2002; Iida and Poesio, 2011; Hayashibe et al., 2011). This dominance is mainly because chunkbased dependency analysis looks most appropriate for Japanese syntax due to its morphosyntactic typology, which includes agglutination and scrambling (Bekki, 2010). However, it is also true that this type of analysis has prevented us from deeper syntactic analysis such as deep parsing (Clark and Curran, 2007) and logical inference (Bos et al., 2004; Bos, 2007), both of which have been surpassing shallow parsing-based approaches in languages like English. In this paper, we present our work on inducing wide-coverage Japanese resources based on combinatory categorial grammar (CCG) (Steedman, 2001). Our work is basically an extension of a seminal work on CCGbank (Hockenmaier and Steedman, 2007), in which the phrase structure trees of the Penn Treebank (PTB) (Marcus et al., 1993) are converted into CCG derivations and a wide-coverage CCG lexicon is then extracted from these derivations. As CCGbank has enabled a variety of outstanding works on wide-coverage deep parsing for English, our resources are expected to significantly contribute to Japanese deep parsing. The application of the CCGbank method to Japanese is not trivial, as resources like PTB are not available in Japanese. The widely used resources for parsing research are the Kyoto corpus (Kawahara et al., 2002) and the NAIST text corpus (Iida et al., 2007), both of which are based on the dependency structures of chunks. Moreover, the relation between chunk-based dependency structures and CCG derivations is not obvious. In this work, we propose a method to integrate multiple dependency-based corpora into phrase structure trees augmented with predicate argument relations. We can then convert the phrase structure trees into CCG derivations. In the following, we describe the details of the integration method as well as Japanese-specific issues in the conversion into CCG derivations. 
The method is empirically evaluated in terms of the quality of the corpus conversion, the coverage of the obtained lexicon, and the accuracy of parsing with the obtained grammar. Additionally, we discuss problems that remain in Japanese resources from the viewpoint of developing CCG derivations. There are three primary contributions of this paper: 1) we show the first comprehensive results for Japanese CCG parsing, 2) we present a methodology for integrating multiple dependency-based re1042 I NP: I′ NPy: I′ NP: I′ NP: I′ give S\NP/NP/NP : λxλyλz.give′yxz them NP :them′ NP :ythem′ > S\NP/NP :λyλz.give′y them′z money NP :money′ NP :money′ NP :money′ > S\NP :λz.give′money′them′z < S :give′money′them′I′ 大使 ambassador NPnc が NOM NPga\NPnc < NPga NPga NPga 交渉 negotiation NPnc に DAT NPni\NPnc < NPni NPni 参加 participation Sstem\NPga\NPni し do-CONT Scont\Sstem <B Scont\NPga\NPni た PAST-BASE Sbase\Scont Sbase\Scont <B Sbase\NPga\NPni < Sbase\NPga < Sbase 政府 government NPnc が NOM NPga\NPnc < NPga NPga NPga 大使 ambassador NPnc を ACC NPwo\NPnc < NPwo NPwo 交渉 negotiation NPnc に DAT NPni\NPnc < NPni 参加さ participation Svo s\NPga\NPni せ CAUSE Scont\NPga\NPwo\(Svo s\NPga) < Scont\NPga\NPwo\NPni < Scont\NPga\NPwo < Scont\NPga < Scont 政府 government NP は NOM NPga\NP < NPga NPga NPga 大使を ambassador-ACC NPwo NPga NPga 交渉に negotiation-DAT NPni NPga 参加さ join Svo s\NPga\NPni せ cause Scont\NPga\NPwo\(Svo s\NPga) < Scont\NPga\NPwo\NPni < Scont\NPga\NPwo < Scont\NPga < Scont 交渉 negotiation NPnc に DAT NPni\NPnc < NPni NPga NPga 参加 participation Sstem\NPni さ do Svo s\Sstem <B Svo s\NPni せ CAUSE Scont\Svo s Scont\Svo s <B Scont\NPni た PAST Sbase\Scont Scont\Svo s Scont <B Sbase\NPni < Sbase 1 Figure 1: A CCG derivation. X/Y : f Y : a → X : fa (>) Y : a X\Y : a → X : fa (<) X/Y : f Y/Z : g → X/Z : λx.f(gx) (> B) Y\Z : g X\Y : f → X\Z : λx.f(gx) (< B) Figure 2: Combinatory rules (used in the current implementation). sources to induce CCG derivations, and 3) we investigate the possibility of further improving CCG analysis by additional resources. 2 Background 2.1 Combinatory Categorial Grammar CCG is a syntactic theory widely accepted in the NLP field. A grammar based on CCG theory consists of categories, which represent syntactic categories of words and phrases, and combinatory rules, which are rules to combine the categories. Categories are either ground categories like S and NP or complex categories in the form of X/Y or X\Y , where X and Y are the categories. Category X/Y intuitively means that it becomes category X when it is combined with another category Y to its right, and X\Y means it takes a category Y to its left. Categories are combined by applying combinatory rules (Fig. 2) to form categories for larger phrases. Figure 1 shows a CCG analysis of a simple English sentence, which is called a derivation. The verb give is assigned category S\NP/NP/NP, which indicates that it takes two NPs to its right, one NP to its left, and finally becomes S. Starting from lexical categories assigned to words, we can obtain categories for phrases by applying the rules recursively. An important property of CCG is a clear interface between syntax and semantics. As shown in Fig. 1, each category is associated with a lambda term of semantic representations, and each combinatory rule is associated with rules for semantic composition. Since these rules are universal, we can obtain different semantic representations by switching the semantic representations of lexical categories. 
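The combinatory rules in Fig. 2 are simple enough to state as code. The toy sketch below encodes complex categories as (slash, result, argument) tuples and ground categories as strings, ignoring features and semantic composition, so it is for illustration only and not part of our grammar implementation.

```python
# A toy encoding of CCG categories and the four combinatory rules in Fig. 2.
FWD, BWD = "/", "\\"

def apply_rules(left, right):
    """Return the category produced by combining `left` and `right`, or None."""
    # Forward application (>):  X/Y  Y  =>  X
    if isinstance(left, tuple) and left[0] == FWD and left[2] == right:
        return left[1]
    # Backward application (<):  Y  X\Y  =>  X
    if isinstance(right, tuple) and right[0] == BWD and right[2] == left:
        return right[1]
    # Forward composition (>B):  X/Y  Y/Z  =>  X/Z
    if (isinstance(left, tuple) and left[0] == FWD and
            isinstance(right, tuple) and right[0] == FWD and left[2] == right[1]):
        return (FWD, left[1], right[2])
    # Backward composition (<B):  Y\Z  X\Y  =>  X\Z
    if (isinstance(left, tuple) and left[0] == BWD and
            isinstance(right, tuple) and right[0] == BWD and right[2] == left[1]):
        return (BWD, right[1], left[2])
    return None

# "give" in Fig. 1 has category ((S\NP)/NP)/NP; chaining the two forward
# applications with its object NPs yields S\NP.
give = (FWD, (FWD, (BWD, "S", "NP"), "NP"), "NP")
print(apply_rules(apply_rules(give, "NP"), "NP"))   # -> ('\\', 'S', 'NP'), i.e. S\NP
```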
This means that we can plug in a variSentence S Verb S\$ (e.g. S\NPga) Noun phrase NP Post particle NPga|o|ni|to\NP Auxiliary verb S\S Table 1: Typical categories for Japanese syntax. Cat. Feature Value Interpretation NP case ga nominal o accusative ni dative to comitative, complementizer, etc. nc none S form stem stem base base neg imperfect or negative cont continuative vo s causative Table 2: Features for Japanese syntax (those used in the examples in this paper). ety of semantic theories with CCG-based syntactic parsing (Bos et al., 2004). 2.2 CCG-based syntactic theory for Japanese Bekki (2010) proposed a comprehensive theory for Japanese syntax based on CCG. While the theory is based on Steedman (2001), it provides concrete explanations for a variety of constructions of Japanese, such as agglutination, scrambling, longdistance dependencies, etc. (Fig. 3). The ground categories in his theory are S, NP, and CONJ (for conjunctions). Table 1 presents typical lexical categories. While most of them are obvious from the theory of CCG, categories for auxiliary verbs require an explanation. In Japanese, auxiliary verbs are extensively used to express various semantic information, such as tense and modality. They agglutinate to the main verb in a sequential order. This is explained in Bekki’s theory by the category S\S combined with a main verb via the function composition rule (<B). Syntactic features are assigned to categories NP and S (Table 2). The feature case represents a syntactic case of a noun phrase. The feature form denotes an inflection form, and is necessary for constraining the grammaticality of agglutination. Our implementation of the grammar basically follows Bekki (2010)’s theory. However, as a first step in implementing a wide-coverage Japanese parser, we focused on the frequent syntactic constructions that are necessary for computing predicate argument relations, including agglutination, inflection, scrambling, case alternation, etc. Other details of the theory are largely simplified (Fig. 3), 1043 NP: I′ NPy: I′ NP: I′ NP: I′ S\NP/NP/NP : λxλyλz.give′yxz them NP :them′ NP :ythem′ > S\NP/NP :λyλz.give′y them′z NP :money′ NP :money′ NP :money′ > S\NP :λz.give′money′them′z < S :give′money′them′I′ 大使 ambassador NPnc が NOM NPga\NPnc < NPga NPga NPga 交渉 negotiation NPnc に DAT NPni\NPnc < NPni NPni 参加 participation Sstem\NPga\NPni し do-CONT Scont\Sstem <B Scont\NPga\NPni た PAST-BASE Sbase\Scont Sbase\Scont <B Sbase\NPga\NPni < Sbase\NPga < Sbase 政府 government NPnc が NOM NPga\NPnc < NPga NPga NPga 大使 ambassador NPnc を ACC NPwo\NPnc < NPwo NPwo 交渉 negotiation NPnc に DAT NPni\NPnc < NPni 参加さ participation Svo s\NPga\NPni せ CAUSE Scont\NPga\NPwo\(Svo s\NPga) Scont\NPga\NPwo\NPni < Scont\NPga\NPwo < Scont\NPga < Scont 政府 government NP は NOM NPga\NP < NPga NPga NPga 大使を ambassador-ACC NPwo NPga NPga 交渉に negotiation-DAT NPni NPga 参加さ join Svo s\NPga\NPni せ cause Scont\NPga\NPwo\(Svo s\NPga) < Scont\NPga\NPwo\NPni < Scont\NPga\NPwo < Scont\NPga < Scont 交渉 negotiation NPnc に DAT NPni\NPnc < NPni NPga NPga 参加 participation Sstem\NPni さ do Svo s\Sstem <B Svo s\NPni せ CAUSE Scont\Svo s Scont\Svo s <B Scont\NPni た PAST Sbase\Scont Scont\Svo s Scont <B Sbase\NPni < Sbase 1 Figure 3: A simplified CCG analysis of the sentence “The ambassador participated in the negotiation.”. S →NP/NP (RelExt) S\NP1 →NP1/NP1 (RelIn) S →S1/S1 (Con) S\$1\NP1 →(S1\$1\NP1)/(S1\$1\NP1) (ConCoord) Figure 4: Type changing rules. The upper two are for relative clauses and the others for continuous clauses. 
coordination and semantic representation in particular. The current implementation recognizes coordinated verbs in continuous clauses (e.g., “彼 はピアノを弾いて歌った/he played the piano and sang”), but the treatment of other types of coordination is largely simplified. For semantic representation, we define predicate argument structures (PASs) rather than the theory’s formal representation based on dynamic logic. Sophisticating our semantic representation is left for future work. For parsing efficiency, we modified the treatment of some constructions so that empty elements are excluded from the implementation. First, we define type changing rules to produce relative and continuous clauses (shown in Fig. 4). The rules produce almost the same results as the theory’s treatment, but without using empty elements (pro, etc.). We also used lexical rules to treat pro-drop and scrambling. For the sentence in Fig. 3, the deletion of the nominal phrase (大使 が), the dative phrase (交渉に), or both results in valid sentences, and shuffling the two phrases does so as well. Lexical entries with the scrambled or dropped arguments are produced by lexical rules in our implementation. 2.3 Linguistic resources for Japanese parsing As described in Sec. 1, dependency-based analysis has been accepted for Japanese syntax. Research on Japanese parsing also relies on dependencybased corpora. Among them, we used the following resources in this work. Kyoto corpus A news text corpus annotated with morphological information, chunk boundKyoto Corpus Chunk 政府 が government NOM 大使 を ambassador ACC 交渉 に negotiation DAT 参加 さ せ た participation do cause PAST NAIST Corpus Dep. Causer ARG-ga ARG-ni Figure 5: The Kyoto and NAIST annotations for “The government had the ambassador participate in the negotiation.”. Accusatives are labeled as ARG-ga in causative (see Sec. 3.2). aries, and dependency relations among chunks (Fig. 5). The dependencies are classified into four types: Para (coordination), A (apposition), I (argument cluster), and Dep (default). Most of the dependencies are annotated as Dep. NAIST text corpus A corpus annotated with anaphora and coreference relations. The same set as the Kyoto corpus is annotated.1 The corpus only focuses on three cases: “ga” (subject), “o” (direct object), and “ni” (indirect object) (Fig. 5). Japanese particle corpus (JP) (Hanaoka et al., 2010) A corpus annotated with distinct grammatical functions of the Japanese particle (postposition) “to”. In Japanese, “to” has many functions, including a complementizer (similar to “that”), a subordinate conjunction (similar to “then”), a coordination conjunction (similar to “and”), and a case marker (similar to “with”). 2.4 Related work Research on Japanese deep parsing is fairly limited. Formal theories of Japanese syntax were presented by Gunji (1987) based on Head-driven Phrase Structure Grammar (HPSG) (Sag et al., 2003) and by Komagata (1999) based on CCG, although their implementations in real-world parsing have not been very successful. JACY (Siegel 1In fact, the NAIST text corpus includes additional texts, but in this work we only use the news text section. 1044 and Bender, 2002) is a large-scale Japanese grammar based on HPSG, but its semantics is tightly embedded in the grammar and it is not as easy to systematically switch them as it is in CCG. Yoshida (2005) proposed methods for extracting a wide-coverage lexicon based on HPSG from a phrase structure treebank of Japanese. 
We largely extended their work by exploiting the standard chunk-based Japanese corpora and demonstrated the first results for Japanese deep parsing with grammar induced from large corpora. Corpus-based acquisition of wide-coverage CCG resources has enjoyed great success for English (Hockenmaier and Steedman, 2007). In that method, PTB was converted into CCG-based derivations from which a wide-coverage CCG lexicon was extracted. CCGbank has been used for the development of wide-coverage CCG parsers (Clark and Curran, 2007). The same methodology has been applied to German (Hockenmaier, 2006), Italian (Bos et al., 2009), and Turkish (C¸ akıcı, 2005). Their treebanks are annotated with dependencies of words, the conversion of which into phrase structures is not a big concern. A notable contribution of the present work is a method for inducing CCG grammars from chunk-based dependency structures, which is not obvious, as we discuss later in this paper. CCG parsing provides not only predicate argument relations but also CCG derivations, which can be used for various semantic processing tasks (Bos et al., 2004; Bos, 2007). Our work constitutes a starting point for such deep linguistic processing for languages like Japanese. 3 Corpus integration and conversion For wide-coverage CCG parsing, we need a) a wide-coverage CCG lexicon, b) combinatory rules, c) training data for parse disambiguation, and d) a parser (e.g., a CKY parser). Since d) is grammar- and language-independent, all we have to develop for a new language is a)–c). As we have adopted the method of CCGbank, which relies on a source treebank to be converted into CCG derivations, a critical issue to address is the absence of a Japanese counterpart to PTB. We only have chunk-based dependency corpora, and their relationship to CCG analysis is not clear. Our solution is to first integrate multiple dependency-based resources and convert them into a phrase structure treebank that is independent ProperNoun エリツィン Yeltsin NP ProperNoun ロシア Russia Noun 大統領 president PostP に DAT PP NP Aux なかっ not VP Verb 許さ forgive VerbSuffix れ PASSIVE VP Aux た PAST VP “to Russian president Yeltsin” “(one) was not forgiven” Figure 6: Internal structures of a nominal chunk (left) and a verbal chunk (right). of CCG analysis (Step 1). Next, we translate the treebank into CCG derivations (Step 2). The idea of Step 2 is similar to what has been done with the English CCGbank, but obviously we have to address language-specific issues. 3.1 Dependencies to phrase structure trees We first integrate and convert available Japanese corpora―namely, the Kyoto corpus, the NAIST text corpus, and the JP corpus ―into a phrase structure treebank, which is similar in spirit to PTB. Our approach is to convert the dependency structures of the Kyoto corpus into phrase structures and then augment them with syntactic/semantic roles from the other two corpora. The conversion involves two steps: 1) recognizing the chunk-internal structures, and (2) converting inter-chunk dependencies into phrase structures. For 1), we don’t have any explicit information in the Kyoto corpus although, in principle, each chunk has internal structures (Vadas and Curran, 2007; Yamada et al., 2010). The lack of a chunk-internal structure makes the dependencyto-constituency conversion more complex than a similar procedure by Bos et al. (2009) that converts an Italian dependency treebank into constituency trees since their dependency trees are annotated down to the level of each word. 
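The two generic chunk-internal rules can be sketched as follows, assuming each chunk is given as a list of (surface, POS) pairs; the POS tag names and the nested-list tree encoding are illustrative simplifications rather than our actual data structures.

```python
# Generic chunk-internal rules: right-branching compound nouns with post-positions
# attached afterwards, and left-branching verbal chunks (V A1 ... An).
def nominal_chunk_tree(words):
    nouns = [w for w in words if w[1] != "PostP"]
    postps = [w for w in words if w[1] == "PostP"]
    tree = nouns[-1]                           # the last word is the head noun
    for w in reversed(nouns[:-1]):             # right-branching compound noun
        tree = ["NP", w, tree]
    for p in postps:                           # then attach each post-position
        tree = ["PP", tree, p]
    return tree

def verbal_chunk_tree(words):
    tree = words[0]
    for w in words[1:]:                        # left-branching: auxiliaries attach in order
        tree = ["VP", tree, w]
    return tree

print(nominal_chunk_tree([("ロシア", "Noun"), ("大統領", "Noun"), ("に", "PostP")]))
print(verbal_chunk_tree([("許さ", "Verb"), ("れ", "VerbSuffix"), ("なかっ", "Aux"), ("た", "Aux")]))
```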
For the current implementation, we abandon the idea of identifying exact structures and instead basically rely on the following generic rules (Fig. 6): Nominal chunks Compound nouns are first formed as a right-branching phrase and post-positions are then attached to it. Verbal chunks Verbal chunks are analyzed as left-branching structures. The rules amount to assume that all but the last word in a compound noun modify the head noun (i.e., the last word) and that a verbal chunk is typically in a form V A1 . . . An, where V is a verb 1045 PP Noun 誕生 birth PostPcm から from PP Noun 死 death PostPcm まで to PostPadnom の adnominal PP PP Noun 過程 process PostPcm を ACC PP NP PP Noun 誕生 birth PostPcm から from PP Noun 死 death PostPcm まで to PostPadnom の adnominal PP PP Noun 過程 process PostPcm を ACC Para Dep “A process from birth to death” Figure 7: From inter-chunk dependencies to a tree. (or other predicative word) and Ais are auxiliaries (see Fig. 6). We chose the left-branching structure as default for verbal chunks because the semantic scopes of the auxiliaries are generally in that order (i.e., A1 has the narrowest scope). For both cases, phrase symbols are percolated upward from the right-most daughters of the branches (except for a few cases like punctuation) because in almost all cases the syntactic head of a Japanese phrase is the right-most element. In practice, we have found several patterns of exceptions for the above rules. We implemented exceptional patterns as a small CFG and determined the chunk-internal structures by deterministic parsing with the generic rules and the CFG. For example, two of the rules we came up with are rule A: Number →PrefixOfNumber Number rule B: ClassifierPhrase →Number Classifier in the precedence: rule A > B > generic rules. Using the above, we bracket a compound noun 約 千 人 死亡 approximately thousand people death PrefixOfNumber Number Classifier CommonNoun “death of approximately one thousand people” as in (((約 千) 人) 死亡) (((approximately thousand) people) death) We can improve chunk-internal structures to some extent by refining the CFG rules. A complete solution like the manual annotation by Vadas and Curran (2007) is left for future work. The conversion of inter-chunk dependencies into phrase structures may sound trivial, but it is not necessarily easy when combined with chunkinternal structures. The problem is to which node in the internal structure of the head the dependent dep modifier-type precedence Para から/PostPcm まで/PostPcm, */(Verb|Aux), ... Dep */PostPcm */(Verb|Aux), */Noun, ... Dep */PostPadnom */Noun, */(Verb|Aux), ... Table 3: Rules to determine adjoin position. PP Noun 犬 dog PostP に DAT VP NP Adj 白い white NP VP Noun 猫 cat Verb 言っ say Aux た PAST VP PP Verb 行け go! PostP と CMP ARG-to ARG-ni ARG-ga ARG-ga ARG-ga ARG-ga ARG-ni ARG-CLS NAIST JP Figure 8: Overlay of pred-arg structure annotation (“The white cat who said “Go!” to the dog.”). tree is adjoined (Fig. 7). In the case shown in the figure, three chunks are in the dependency relation indicated by arrows on the top. The dotted arrows show the nodes to which the subtrees are adjoined. Without any human-created resources, we cannot always determine the adjoin positions correctly. Therefore, as a compromise, we wound up implementing approximate heuristic rules to determine the adjoin positions. Table 3 shows examples of such rules. A rule specifies a precedence of the possible adjoin nodes as an ordered list of patterns on the lexical head of the subtree under an adjoin position. 
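The precedence-based choice of adjoin position can be sketched as below, where each candidate adjoin node is represented by the "surface/POS" string of its lexical (right-most) head and patterns are matched with fnmatch. The rule table contains only the example entries from Table 3, and the (Verb|Aux) alternation is split into two separate patterns; this is an illustration of the lookup, not our full rule set.

```python
# Precedence-based adjoin-node selection in the spirit of Table 3.
from fnmatch import fnmatchcase

RULES = {
    ("Para", "から/PostPcm"): ["まで/PostPcm", "*/Verb", "*/Aux"],
    ("Dep",  "*/PostPcm"):    ["*/Verb", "*/Aux", "*/Noun"],
    ("Dep",  "*/PostPadnom"): ["*/Noun", "*/Verb", "*/Aux"],
}

def lookup_precedence(dep_type, modifier_head):
    for (dtype, pattern), precedence in RULES.items():
        if dtype == dep_type and fnmatchcase(modifier_head, pattern):
            return precedence
    return ["*/*"]                             # fall back to any node

def select_adjoin_node(dep_type, modifier_head, candidate_heads):
    """Return the index of the first candidate matching the precedence list."""
    for pattern in lookup_precedence(dep_type, modifier_head):
        for i, head in enumerate(candidate_heads):
            if fnmatchcase(head, pattern):
                return i
    return len(candidate_heads) - 1            # default: attach to the last (root) node

# The "Para" case in Fig. 7: the modifier headed by "から/PostPcm" adjoins to the
# node headed by "まで/PostPcm" rather than to the noun "過程/Noun".
print(select_adjoin_node("Para", "から/PostPcm", ["まで/PostPcm", "過程/Noun"]))
```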
The precedence is defined for each combination of the type of the dependent phrase, which is determined by its lexical head, and the dependency type in the Kyoto corpus. To select the adjoin position for the left-most subtree in Fig. 7, for instance, we look up the rule table using the dependency type, “Para”, and the lexical head of the modifier subtree, “ から /PostPcm”, as the key, and find the precedence “ ま で/PostPcm, */(Verb|Aux), ...”. We thus select the PP-node on the middle subtree indicated by the dotted arrow because its lexical head (the rightmost word), “ まで/PostPcm”, matches the first pattern in the precedence list. In general, we seek for an adjoin node for each pattern p in the precedence list, until we find a first match. The semantic annotation given in the NAIST corpus and the JP corpus is overlaid on the phrase structure trees with slight modifications (Fig. 8). 1046 PP Noun 交渉 negotiation PostPcm に DAT VP Noun 参加 participation Verb さ do VerbSuffix せ CAUSE Aux た PAST VP VP S NPni NP 交渉 negotiation T1 に DAT T4 T5 参加 participation S\S さ do S\S せ CAUSE S\S た PAST T3 T2 S < < < or <B < or <B < or <B NPni NPnc 交渉 negotiation NPni\NPnc に DAT Svo_s\NPni Svo_s\NPni 参加 participation Svo_s\Svo_s さ do Scont\Svo_s せ CAUSE Sbase\Scont た PAST Scont\NPni Sbase\NPni Sbase Step 2-1 Step 2-2, 2-3 Figure 9: A phrase structure into a CCG derivation. In the figure, the annotation given in the two corpora is shown inside the dotted box at the bottom. We converted the predicate-argument annotations given as labeled word-to-word dependencies into the relations between the predicate words and their argument phrases. The results are thus similar to the annotation style of PropBank (Palmer et al., 2005). In the NAIST corpus, each pred-arg relation is labeled with the argument-type (ga/o/ni) and a flag indicating that the relation is mediated by either a syntactic dependency or a zero anaphora. For a relation of a predicate wp and its argument wa in the NAIST corpus, the boundary of the argument phrase is determined as follows: 1. If wa precedes wp and the relation is mediated by a syntactic dep., select the maximum PP that is formed by attaching one or more postpositions to the NP headed by wa. 2. If wp precedes wa or the relation is mediated by a zero anaphora, select the maximum NP headed by wa that does not include wp. In the figure, “犬/dog に/DAT” is marked as the niargument of the predicate “言っ/say” (Case 1), and “白い/white 猫/cat” is marked as its ga-argument (Case 2). Case 1 is for the most basic construction, where an argument PP precedes its predicate. Case VP 友達 に friend-DAT PP VP 会う meet-BASE NPni < VP 10時 に 10 o’clock-TIME PP VP 会う meet-BASE T/T > X S 友達 に friend-DAT NPni S\NPni 会う meet-BASE S 10時 に 10 o’clock-TIME S\S S 会う meet-BASE “(to) meet at ten” “(to) meet a friend” Figure 10: An argument post particle phrase (PP) (upper) and an adjunct PP (lower). 2 covers the relative clause construction, where a relative clause precedes the head NP, the modification of a noun by an adjective, and the relations mediated by zero anaphora. The JP corpus provides only the function label to each particle “to” in the text. We determined the argument phrases marked by the “to” particles labeled as (nominal or clausal) argument-markers in a similar way to Case 1 above and identified the predicate words as the lexical heads of the phrases to which the PPto phrases attach. 3.2 Phrase structures to CCG derivations This step consists of three procedures (Fig. 9): 1. 
Add constraints on categories and features to tree nodes as far as possible and assign a combinatory rule to each branching. 2. Apply combinatory rules to all branching and obtain CCG derivations. 3. Add feature constraints to terminal nodes. 3.2.1 Local constraint on derivations According to the phrase structures, the first procedure in Step 2 imposes restrictions on the resulting CCG derivations. To describe the restrictions, we focus on some of the notable constructions and illustrate the restrictions for each of them. Phrases headed by case marker particles A phrase of this type must be either an argument (Fig. 10, upper) or a modifier (Fig. 10, lower) of a predicative. Distinction between the two is made based on the pred-arg annotation of the predicative. If a phrase is found to be an argument, 1) category NP is assigned to the corresponding node, 2) the case feature of the category is given according to the particle (in the case of Fig. 10 (upper), 1047 VP Verb 話さ Speak-NEG Aux なかっ not-CONT Aux た PAST-BASE VP Scont\S Sbase\S “did not speak” < or <B < or <B Scont\NPga Sneg\NPga 話さ Speak-NEG Scont\Sneg なかっ not-CONT Sbase\Scont た PAST-BASE Sbase\NPga Figure 11: An auxiliary verb and its conversion. VP Verb 調べ inquire-NEG VerbSuffix させる cause-BASE 彼女 に her-DAT PP VP ARG-ga “(to) have her inquire” < S\NPni[1] S\S させる cause-BASE 彼女 に her-DAT NPni[1] S S\NPni[1] 調べ inquire-NEG ga [1] NPni[1] ga: [1] Figure 12: A causative construction. ni for dative), and 3) the combinatory rule that combines the particle phrase and the predicative phrase is assigned backward function application rule (<). Otherwise, a category T/T is assigned to the corresponding modifier node and the rule will be forward function application (>). Auxiliary verbs As described in Sec. 2.2, an auxiliary verb is always given the category S\S and is combined with a verbal phrase via < or <B (Fig. 11). Furthermore, we assign the form feature value of the returning category S according to the inflection form of the auxiliary. In the case shown in the figure, Sbase\S is assigned for “た/PASTBASE” and Scont\S for “なかっ/not-CONT”. As a result of this restriction, we can obtain conditions for every auxiliary agglutination because the two form values in S\S are both restricted after applying combinatory rules (Sec. 3.2.2). Case alternations In addition to the argument/adjunct distinction illustrated above, a process is needed for argument phrases of predicates involving case alternation. Such predicates are either causative (see Fig. 12) or passive verbs and can be detected by voice annotation from the NAIST corpus. For an argument of that type of verb, its deep case (ga for Fig. 12) must be used to construct the semantic representation, namely the PAS. As well as assigning the shallow case value (ni in Fig. 12) to the argument’s category NP, as usual, we assign a restriction to the PAS S\NPo[1] S\NPo 買っ buy-CONT S\S た PAST-ATTR NP 本 book NP NP[1]/NP[1] VP Verb 買っ buy-CONT Aux た PAST-ATTR Noun 本 book NP S\NP[1] NP[1]/NP[1] Noun 店 store NP 本 を book-ACC PP VP Verb 買っ buy-CONT Aux た PAST-ATTR VP X S NP/NP NP 店 store NP NP/NP 本 を book-ACC NPo S S\NPo S\NPo 買っ buy-CONT S\S た PAST-ATTR “a store where (I) bought the book” “a book which (I) bought” Figure 13: A relative clause with/without argument extraction (upper/lower, respectively). of the verb so that the semantic argument corresponding to the deep case is co-indexed with the argument NP. These restrictions are then utilized for PAS construction in Sec. 3.2.3. 
Relative clauses A relative clause can be detected as a subtree that has a VP as its left child and an NP as its right child, as shown in Fig. 13. The conversion of the subtree consists of 1) inserting a node on the top of the left VP (see the right-hand side of Fig. 13), and 2) assigning the appropriate unary rule to make the new node. The difference between candidate rules RelExt and RelIn (see Fig. 4) is whether the right-hand NP is an obligatory argument of the VP or not, which can be determined by the pred-arg annotation on the predicate in the VP. In the upper example in Fig. 13, RelIn is assigned because the right NP “book” is annotated as an accusative argument of the predicate “buy”. In contrast, RelExt is assigned in the lower side in the figure because the right NP “store” is not annotated as an argument. Continuous clauses A continuous clause can be detected as a subtree with a VP of continuous form as its left child and a VP as its right child. Its conversion is similar to that of a relative clause, and only differs in that the candidate rules are Con and ConCoord. ConCoord generates a continuous clause that shares arguments with the main clause while Con produces one without shared arguments. Rule assignment is done by comparing the pred-arg annotations of the two phrases. 1048 Training Develop. Test #Sentences 24,283 4,833 9,284 #Chunks 234,685 47,571 89,874 #Words 664,898 136,585 255,624 Table 4: Statistics of input linguistic resources. 3.2.2 Inverse application of rules The second procedure in Step 2 begins with assigning a category S to the root node. A combinatory rule assigned to each branching is then “inversely” applied so that the constraint assigned to the parent transfers to the children. 3.2.3 Constraints on terminal nodes The final process consists of a) imposing restrictions on the terminal category in order to instantiate all the feature values, and b) constructing a PAS for each verbal terminal. An example of process a) includes setting the form features in the verb category, such as S\NPni, according to the inflection form of the verb. As for b), arguments in a PAS are given according to the category and the partial restriction. For instance, if a category S\NPni is obtained for “調べ/inquire” (Fig. 12), the PAS for “inquire” is unary because the category has one argument category (NPni), and the category is co-indexed with the semantic argument ga in the PAS due to the partial restriction depicted in Sec. 3.2.1. As a result, a lexical entry is obtained as 調べ⊢S\NPni[1]: inquire([1]). 3.3 Lexical entries Finally, lexical rules are applied to each of the obtained lexical entries in order to reduce them to the canonical form. Since words in the corpus (especially verbs) often involve pro-drop and scrambling, there are a lot of obtained entries that have slightly varied categories yet share a PAS. We assume that an obtained entry is a variation of the canonical one and register the canonical entries in the lexicon. We treat only subject deletion for prodrop because there is not sufficient information to judge the deletion of other arguments. Scrambling is simply treated as permutation of arguments. 4 Evaluation We used the following for the implementation of our resources: Kyoto corpus ver. 4.02, NAIST text 2http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php? Kyoto\%20University\%20Text\%20Corpus Training Develop. Test St.1 St.2 St.1 St.2 St.1 St.2 Sent. 24,283 24,116 4,833 4,803 9,284 9,245 Converted 24,116 22,820 4,803 4,559 9,245 8,769 Con. 
rate 99.3 94.6 99.4 94.9 99.6 94.9 Table 5: Statistics of corpus conversion. Sentential Coverage Covered Uncovered Cov. (%) Devel. 3,920 639 85.99 Test 7,610 1,159 86.78 Lexical Coverage Word Known Unknown combi. cat. word Devel. 127,144 126,383 682 79 0 Test 238,083 236,651 1,242 145 0 Table 6: Sentential and lexical coverage. corpus ver. 1.53, and JP corpus ver. 1.04. The integrated corpus is divided into training, development, and final test sets following the standard data split in previous works on Japanese dependency parsing (Kudo and Matsumoto, 2002). The details of these resources are shown in Table 4. 4.1 Corpus conversion and lexicon extraction Table 5 shows the number of successful conversions performed by our method. In total, we obtained 22,820 CCG derivations from 24,283 sentences (in the training set), resulting in the total conversion rate of 93.98%. The table shows we lost more sentences in Step 2 than in Step 1. This is natural because Step 2 imposed more restrictions on resulting structures and therefore detected more discrepancies including compounding errors. Our conversion rate is about 5.5 points lower than the English counterpart (Hockenmaier and Steedman, 2007). Manual investigation of the sampled derivations would be beneficial for the conversion improvement. For the lexicon extraction from the CCGbank, we obtained 699 types of lexical categories from 616,305 word tokens. After lexical reduction, the number of categories decreased to 454, which in turn may produce 5,342 categories by lexical expansion. The average number of categories for a word type was 11.68 as a result. 4.2 Evaluation of coverage Following the evaluation criteria in (Hockenmaier and Steedman, 2007), we measured the coverage 3http://cl.naist.jp/nldata/corpus/ 4https://alaginrc.nict.go.jp/resources/tocorpus/ tocorpusabstract.html 1049 of the grammar on unseen texts. First, we obtained CCG derivations for evaluation sets by applying our conversion method and then used these derivations as gold standard. Lexical coverage indicates the number of words to which the grammar assigns a gold standard category. Sentential coverage indicates the number of sentences in which all words are assigned gold standard categories 5. Table 6 shows the evaluation results. Lexical coverage was 99.40% with rare word treatment, which is in the same level as the case of the English CCG parser C&C (Clark and Curran, 2007). We also measured coverage in a “weak” sense, which means the number of sentences that are given at least one analysis (not necessarily correct) by the obtained grammar. This number was 99.12 % and 99.06 % for the development and the test set, respectively, which is sufficiently high for wide-coverage parsing of real-world texts. 4.3 Evaluation of parsing accuracy Finally, we evaluated the parsing accuracy. We employed the parser and the supertagger of (Miyao and Tsujii, 2008), specifically, its generalized modules for lexicalized grammars. We trained log-linear models in the same way as (Clark and Curran, 2007) using the training set as training data. Feature sets were simply borrowed from an English parser; no tuning was performed. Following conventions in research on Japanese dependency parsing, gold morphological analysis results were input to a parser. Following C&C, the evaluation measure was precision and recall over dependencies, where a dependency is defined as a 4-tuple: a head of a functor, a functor category, an argument slot, and a head of an argument. 
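The evaluation measure can be sketched as follows, with each dependency represented as the 4-tuple just described; dropping the functor category and argument slot gives the unlabeled scores. The example tuples are fabricated toy values, not taken from our data.

```python
# Labeled and unlabeled precision/recall/F1 over dependency 4-tuples
# (functor head, functor category, argument slot, argument head).
from collections import Counter

def prf(gold, pred):
    overlap = sum((Counter(gold) & Counter(pred)).values())
    p = overlap / len(pred) if pred else 0.0
    r = overlap / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def unlabeled(deps):
    return [(func_head, arg_head) for func_head, _, _, arg_head in deps]

gold = [(6, "S\\NPga\\NPni", 1, 4), (6, "S\\NPga\\NPni", 2, 2)]
pred = [(6, "S\\NPga\\NPni", 1, 4), (6, "S\\NPga\\NPwo", 2, 2)]
print("labeled   P/R/F:", prf(gold, pred))
print("unlabeled P/R/F:", prf(unlabeled(gold), unlabeled(pred)))
```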
Table 7 shows the parsing accuracy on the development and the test sets. The supertagging accuracy is presented in the upper table. While our coverage was almost the same as C&C, the performance of our supertagger and parser was lower. To improve the performance, tuning disambiguation models for Japanese is a possible approach. Comparing the parser’s performance with previous works on Japanese dependency parsing is difficult as our figures are not directly comparable to theirs. Sassano and Kurohashi (2009) reported the accuracy of their parser as 88.48 and 95.09 5Since a gold derivation can logically be obtained if gold categories are assigned to all words in a sentence, sentential coverage means that the obtained lexicon has the ability to produce exactly correct derivations for those sentences. Supertagging accuracy Lex. Cov. Cat. Acc. Devel. 99.40 90.86 Test 99.40 90.69 C&C 99.63 94.32 Overall performance LP LR LF UP UR UF Devel. 82.55 82.73 82.64 90.02 90.22 90.12 Test 82.40 82.59 82.50 89.95 90.15 90.05 C&C 88.34 86.96 87.64 93.74 92.28 93.00 Table 7: Parsing accuracy. LP, LR and LF refer to labeled precision, recall, and F-score respectively. UP, UR, and UF are for unlabeled. in unlabeled chunk-based and word-based F1 respectively. While our score of 90.05 in unlabeled category dependency seems to be lower than their word-based score, this is reasonable because our category dependency includes more difficult problems, such as whether a subject PP is shared by coordinated verbs. Thus, our parser is expected to be capable of real-world Japanese text analysis as well as dependency parsers. 5 Conclusion In this paper, we proposed a method to induce wide-coverage Japanese resources based on CCG that will lead to deeper syntactic analysis for Japanese and presented empirical evaluation in terms of the quality of the obtained lexicon and the parsing accuracy. Although our work is basically in line with CCGbank, the application of the method to Japanese is not trivial due to the fact that the relationship between chunk-based dependency structures and CCG derivations is not obvious. Our method integrates multiple dependencybased resources to convert them into an integrated phrase structure treebank. The obtained treebank is then transformed into CCG derivations. The empirical evaluation in Sec. 4 shows that our corpus conversion successfully converts 94 % of the corpus sentences and the coverage of the lexicon is 99.4 %, which is sufficiently high for analyzing real-world texts. A comparison of the parsing accuracy with previous works on Japanese dependency parsing and English CCG parsing indicates that our parser can analyze real-world Japanese texts fairly well and that there is room for improvement in disambiguation models. 1050 References Daisuke Bekki. 2010. Formal Theory of Japanese Syntax. Kuroshio Shuppan. (In Japanese). Johan Bos, Stephen Clark, Mark Steedman, James R. Curran, and Julia Hockenmaier. 2004. Widecoverage semantic representations from a CCG parser. In Proceedings of COLING 2004, pages 1240–1246. Johan Bos, Cristina Bosco, and Alessandro Mazzei. 2009. Converting a dependency treebank to a categorial grammar treebank for Italian. In Proceedings of the Eighth International Workshop on Treebanks and Linguistic Theories (TLT8), pages 27–38. Johan Bos. 2007. Recognising textual entailment and computational semantics. In Proceedings of Seventh International Workshop on Computational Semantics IWCS-7, page 1. Ruken C¸ akıcı. 2005. 
Automatic induction of a CCG grammar for Turkish. In Proceedings of ACL Student Research Workshop, pages 73–78. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4). Takao Gunji. 1987. Japanese Phrase Structure Grammar: A Unification-based Approach. D. Reidel. Hiroki Hanaoka, Hideki Mima, and Jun’ichi Tsujii. 2010. A Japanese particle corpus built by examplebased annotation. In Proceedings of LREC 2010. Yuta Hayashibe, Mamoru Komachi, and Yuji Matsumoto. 2011. Japanese predicate argument structure analysis exploiting argument position and type. In Proceedings of IJCNLP 2011, pages 201–209. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. Julia Hockenmaier. 2006. Creating a CCGbank and a wide-coverage CCG lexicon for German. In Proceedings of the Joint Conference of COLING/ACL 2006. Ryu Iida and Massimo Poesio. 2011. A cross-lingual ILP solution to zero anaphora resolution. In Proceedings of ACL-HLT 2011, pages 804–813. Ryu Iida, Mamoru Komachi, Kentaro Inui, and Yuji Matsumoto. 2007. Annotating a Japanese text corpus with predicate-argument and coreference relations. In Proceedings of Linguistic Annotation Workshop, pages 132–139. Daisuke Kawahara and Sadao Kurohashi. 2011. Generative modeling of coordination by factoring parallelism and selectional preferences. In Proceedings of IJCNLP 2011. Daisuke Kawahara, Sadao Kurohashi, and Koiti Hasida. 2002. Construction of a Japanese relevance-tagged corpus. In Proceedings of the 8th Annual Meeting of the Association for Natural Language Processing, pages 495–498. (In Japanese). Nobo Komagata. 1999. Information Structure in Texts: A Computational Analysis of Contextual Appropriateness in English and Japanese. Ph.D. thesis, University of Pennsylvania. Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analyisis using cascaded chunking. In Proceedings of CoNLL 2002. M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Yusuke Miyao and Jun’ichi Tsujii. 2008. Feature forest models for probabilistic HPSG parsing. Computational Linguistics, 34(1):35–80. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Ivan A. Sag, Thomas Wasow, and Emily M. Bender. 2003. Syntactic Theory: A Formal Introduction, 2nd Edition. CSLI Publications. Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to Japanese zero anaphora resolution with large-scale lexicalized case frames. In Proceedings of IJCNLP 2011. Manabu Sassano and Sadao Kurohashi. 2009. A unified single scan algorithm for Japanese base phrase chunking and dependency parsing. In Proceedings of ACL-IJCNLP 2009. Melanie Siegel and Emily M. Bender. 2002. Efficient deep processing of Japanese. In Proceedings of the 3rd Workshop on Asian Language Resources and International Standardization. Mark Steedman. 2001. The Syntactic Process. MIT Press. David Vadas and James Curran. 2007. Adding noun phrase structure to the Penn Treebank. In Proceedings of ACL 2007, pages 240–247. Emiko Yamada, Eiji Aramaki, Takeshi Imai, and Kazuhiko Ohe. 2010. Internal structure of a disease name and its application for ICD coding. 
Studies in health technology and informatics, 160(2):1010– 1014. Kazuhiro Yoshida. 2005. Corpus-oriented development of Japanese HPSG parsers. In Proceedings of the ACL Student Research Workshop. 1051
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1052–1062, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Transition-based Dependency Parsing with Selectional Branching Jinho D. Choi Department of Computer Science University of Massachusetts Amherst Amherst, MA, 01003, USA [email protected] Andrew McCallum Department of Computer Science University of Massachusetts Amherst Amherst, MA, 01003, USA [email protected] Abstract We present a novel approach, called selectional branching, which uses confidence estimates to decide when to employ a beam, providing the accuracy of beam search at speeds close to a greedy transition-based dependency parsing approach. Selectional branching is guaranteed to perform a fewer number of transitions than beam search yet performs as accurately. We also present a new transition-based dependency parsing algorithm that gives a complexity of O(n) for projective parsing and an expected linear time speed for non-projective parsing. With the standard setup, our parser shows an unlabeled attachment score of 92.96% and a parsing speed of 9 milliseconds per sentence, which is faster and more accurate than the current state-of-the-art transitionbased parser that uses beam search. 1 Introduction Transition-based dependency parsing has gained considerable interest because it runs fast and performs accurately. Transition-based parsing gives complexities as low as O(n) and O(n2) for projective and non-projective parsing, respectively (Nivre, 2008).1 The complexity is lower for projective parsing because a parser can deterministically skip tokens violating projectivity, while this property is not assumed for non-projective parsing. Nonetheless, it is possible to perform non-projective parsing in expected linear time because the amount of nonprojective dependencies is notably smaller (Nivre and Nilsson, 2005) so a parser can assume projectivity for most cases while recognizing ones for which projectivity should not be assumed (Nivre, 2009; Choi and Palmer, 2011). 1We refer parsing approaches that produce only projective dependency trees as projective parsing and both projective and non-projective dependency trees as non-projective parsing. Greedy transition-based dependency parsing has been widely deployed because of its speed (Cer et al., 2010); however, state-of-the-art accuracies have been achieved by globally optimized parsers using beam search (Zhang and Clark, 2008; Huang and Sagae, 2010; Zhang and Nivre, 2011; Bohnet and Nivre, 2012). These approaches generate multiple transition sequences given a sentence, and pick one with the highest confidence. Coupled with dynamic programming, transition-based dependency parsing with beam search can be done very efficiently and gives significant improvement to parsing accuracy. One downside of beam search is that it always uses a fixed size of beam even when a smaller size of beam is sufficient for good results. In our experiments, a greedy parser performs as accurately as a parser that uses beam search for about 64% of time. Thus, it is preferred if the beam size is not fixed but proportional to the number of low confidence predictions that a greedy parser makes, in which case, fewer transition sequences need to be explored to produce the same or similar parse output. We first present a new transition-based parsing algorithm that gives a complexity of O(n) for projective parsing and an expected linear time speed for non-projective parsing. 
We then introduce selectional branching that uses confidence estimates to decide when to employ a beam. With our new approach, we achieve a higher parsing accuracy than the current state-of-the-art transition-based parser that uses beam search and a much faster speed. 2 Transition-based dependency parsing We introduce a transition-based dependency parsing algorithm that is a hybrid between Nivre’s arceager and list-based algorithms (Nivre, 2003; Nivre, 2008). Nivre’s arc-eager is a projective parsing algorithm showing a complexity of O(n). Nivre’s list-based algorithm is a non-projective parsing algorithm showing a complexity of O(n2). Table 1 shows transitions in our algorithm. The top 4 and 1052 Transition Current state ⇒Resulting state LEFTl-REDUCE ( [σ|i], δ, [j|β], A ) ⇒( σ, δ, [j|β], A ∪{i l←j} ) RIGHTl-SHIFT ( [σ|i], δ, [j|β], A ) ⇒( [σ|i|δ|j], [ ], β, A ∪{i l→j} ) NO-SHIFT ( [σ|i], δ, [j|β], A ) ⇒( [σ|i|δ|j], [ ], β, A ) NO-REDUCE ( [σ|i], δ, [j|β], A ) ⇒( σ, δ, [j|β], A ) LEFTl-PASS ( [σ|i], δ, [j|β], A ) ⇒( σ, [i|δ], [j|β], A ∪{i l←j} ) RIGHTl-PASS ( [σ|i], δ, [j|β], A ) ⇒( σ, [i|δ], [j|β], A ∪{i l→j} ) NO-PASS ( [σ|i], δ, [j|β], A ) ⇒( σ, [i|δ], [j|β], A ) Table 1: Transitions in our dependency parsing algorithm. Transition Preconditions LEFTl-∗ [i ̸= 0] ∧¬[∃k. (i ←k) ∈A] ∧¬[(i →∗j) ∈A] RIGHTl-∗ ¬[∃k. (k →j) ∈A] ∧¬[(i ← ∗ j) ∈A] ∗-SHIFT ¬[∃k ∈σ. (k ̸= i) ∧((k ←j) ∨(k →j))] ∗-REDUCE [∃h. (h →i) ∈A] ∧¬[∃k ∈β. (i →k)] Table 2: Preconditions of the transitions in Table 1 (∗is a wildcard representing any transition). the bottom 3 transitions are inherited from Nivre’s arc-eager and list-based algorithms, respectively.2 Each parsing state is represented as a tuple (σ, δ, β, A), where σ is a stack containing processed tokens, δ is a deque containing tokens popped out of σ but will be pushed back into σ in later parsing states to handle non-projectivity, and β is a buffer containing unprocessed tokens. A is a set of labeled arcs. (i, j) represent indices of their corresponding tokens (wi, wj), l is a dependency label, and the 0 identifier corresponds to w0, introduced as the root of a tree. The initial state is ([0], [ ], [1, . . . , n], ∅), and the final state is (σ, δ, [ ], A). At any parsing state, a decision is made by comparing the top of σ, wi, and the first element of β, wj. This decision is consulted by gold-standard trees during training and a classifier during decoding. LEFTl-∗and RIGHTl-∗are performed when wj is the head of wi with a dependency label l, and vice versa. After LEFTl-∗or RIGHTl-∗, an arc is added to A. NO-∗is performed when no dependency is found for wi and wj. ∗-SHIFT is performed when no dependency is found for wj and any token in σ other than wi. After ∗-SHIFT, all tokens in δ as well as wj are pushed into σ. ∗-REDUCE is performed when wi already has the head, and wi is not the head of any token in β. After ∗-REDUCE, wi is popped out of σ. ∗-PASS is performed when neither ∗-SHIFT nor ∗-REDUCE can be performed. After ∗-PASS, wi is moved to the front of δ so it 2The parsing complexity of a transition-based dependency parsing algorithm is determined by the number of transitions performed with respect to the number of tokens in a sentence, say n (Kübler et al., 2009). can be compared to other tokens in β later. Each transition needs to satisfy certain preconditions to ensure the properties of a well-formed dependency graph (Nivre, 2008); they are described in Table 2. 
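As an illustration of the state bookkeeping described above, the following sketch implements the tuple (σ, δ, β, A) and a few of the transitions in Table 1 in plain Python. It is only an illustration, not the authors' released parser: the remaining transitions, the preconditions in Table 2, and the classifier that chooses among transitions are omitted, and the order in which δ is pushed back onto σ is simplified.

```python
# A minimal sketch of the parsing state and a few transitions from Table 1;
# illustrative Python only, not the released parser.

class State:
    def __init__(self, n):
        self.sigma = [0]                    # stack; w0 is the artificial root
        self.delta = []                     # deque of tokens passed over
        self.beta = list(range(1, n + 1))   # buffer of unprocessed tokens
        self.arcs = set()                   # labeled arcs (head, label, dependent)

    def left_reduce(self, label):           # w_j heads w_i; pop w_i
        i, j = self.sigma.pop(), self.beta[0]
        self.arcs.add((j, label, i))

    def right_shift(self, label):           # w_i heads w_j; shift w_j
        i, j = self.sigma[-1], self.beta.pop(0)
        self.arcs.add((i, label, j))
        self.sigma += self.delta + [j]      # deque contents return to the stack
        self.delta = []

    def no_shift(self):                     # no arc; shift w_j (and the deque)
        self.sigma += self.delta + [self.beta.pop(0)]
        self.delta = []

    def no_reduce(self):                    # w_i already has its head; pop it
        self.sigma.pop()

    def no_pass(self):                      # keep w_i for later comparisons
        self.delta.insert(0, self.sigma.pop())

# Replaying the first four steps of Table 3:
s = State(8)
s.no_shift(); s.no_shift(); s.no_shift()
s.left_reduce("NSUBJ")                      # adds 3 <-NSUBJ- 4
print(s.sigma, s.delta, s.beta[0], sorted(s.arcs))
```

Replaying the first steps of Table 3 with this sketch yields the arc 3 ←NSUBJ− 4 and the stack [0, 1, 2], matching state 4 of the table.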
(i ←j) and (i ← ∗ j) indicate that wj is the head and an ancestor of wi with any label, respectively. When a parser is trained on only projective trees, our algorithm learns only the top 4 transitions and produces only projective trees during decoding. In this case, it performs at most 2n −1 transitions per sentence so the complexity is O(n). When a parser is trained on a mixture of projective and nonprojective trees, our algorithm learns all transitions and produces both kinds of trees during decoding. In this case, it performs at most n(n+1) 2 transitions so the complexity is O(n2). However, because of the presence of ∗-SHIFT and ∗-REDUCE, our algorithm is capable of skipping or removing tokens during non-projective parsing, which allows it to show a linear time parsing speed in practice. 70 0 10 20 30 40 50 60 130 0 20 40 60 80 100 Sentence length Transitions Figure 1: The # of transitions performed during training with respect to sentence lengths for Dutch. 1053 Transition σ δ β A 0 Initialization [0] [ ] [1|β] ∅ 1 NO-SHIFT [σ|1] [ ] [2|β] 2 NO-SHIFT [σ|2] [ ] [3|β] 3 NO-SHIFT [σ|3] [ ] [4|β] 4 LEFT-REDUCE [σ|2] [ ] [4|β] A ∪{3 ←NSUBJ−4} 5 NO-PASS [σ|1] [2] [4|β] 6 RIGHT-SHIFT [σ|4] [ ] [5|β] A ∪{1 −RCMOD→4} 7 NO-SHIFT [σ|5] [ ] [6|β] 8 LEFT-REDUCE [σ|4] [ ] [6|β] A ∪{5 ←AUX−6} 9 RIGHT-PASS [σ|2] [4] [6|β] A ∪{4 −XCOMP→6} 10 LEFT-REDUCE [σ|1] [4] [6|β] A ∪{2 ←DOBJ−6} 11 NO-SHIFT [σ|6] [ ] [7|β] 12 NO-REDUCE [σ|4] [ ] [7|β] 13 NO-REDUCE [σ|1] [ ] [7|β] 14 LEFT-REDUCE [0] [ ] [7|β] A ∪{1 ←NSUBJ−7} 15 RIGHT-SHIFT [σ|7] [ ] [8] A ∪{0 −ROOT→7} 16 RIGHT-SHIFT [σ|8] [ ] [ ] A ∪{7 −ADV→8} Table 3: A transition sequence generated by our parsing algorithm using gold-standard decisions. Figure 1 shows the total number of transitions performed during training with respect to sentence lengths for Dutch. Among all languages distributed by the CoNLL-X shared task (Buchholz and Marsi, 2006), Dutch consists of the highest number of non-projective dependencies (5.4% in arcs, 36.4% in trees). Even with such a high number of nonprojective dependencies, our parsing algorithm still shows a linear growth in transitions. Table 3 shows a transition sequence generated by our parsing algorithm using gold-standard decisions. After w3 and w4 are compared, w3 is popped out of σ (state 4) so it is not compared to any other token in β (states 9 and 13). After w2 and w4 are compared, w2 is moved to δ (state 5) so it can be compared to other tokens in β (state 10). After w4 and w6 are compared, RIGHT-PASS is performed (state 9) because there is a dependency between w6 and w2 in σ (state 10). After w6 and w7 are compared, w6 is popped out of σ (state 12) because it is not needed for later parsing states. 3 Selectional branching 3.1 Motivation For transition-based parsing, state-of-the-art accuracies have been achieved by parsers optimized on multiple transition sequences using beam search, which can be done very efficiently when it is coupled with dynamic programming (Zhang and Clark, 2008; Huang and Sagae, 2010; Zhang and Nivre, 2011; Huang et al., 2012; Bohnet and Nivre, 2012). Despite all the benefits, there is one downside of this approach; it generates a fixed number of transition sequences no matter how confident the onebest sequence is.3 If every prediction leading to the one-best sequence is confident, it may not be necessary to explore more sequences to get the best output. 
Thus, it is preferred if the beam size is not fixed but proportional to the number of low confidence predictions made for the one-best sequence. The selectional branching method presented here performs at most d · t −e transitions, where t is the maximum number of transitions performed to generate a transition sequence, d = min(b, |λ|+1), b is the beam size, |λ| is the number of low confidence predictions made for the one-best sequence, and e = d(d−1) 2 . Compared to beam search that always performs b · t transitions, selectional branching is guaranteed to perform fewer transitions given the same beam size because d ≤b and e > 0 except for d = 1, in which case, no branching happens. With selectional branching, our parser shows slightly 3The ‘one-best sequence’ is a transition sequence generated by taking only the best prediction at each parsing state. 1054 higher parsing accuracy than the current state-ofthe-art transition-based parser using beam search, and performs about 3 times faster. 3.2 Branching strategy Figure 2 shows an overview of our branching strategy. sij represents a parsing state, where i is the index of the current transition sequence and j is the index of the current parsing state (e.g., s12 represents the 2nd parsing state in the 1st transition sequence). pkj represents the k’th best prediction (in our case, it is a predicted transition) given s1j (e.g., p21 is the 2nd-best prediction given s11). s11 s12 p11 s22 … … s1t p12 … … s2t p21 s33 p22 … s3t sdt … … … p2j T1 = T2 = T3 = Td = p1j Figure 2: An overview of our branching strategy. Each sequence Ti>1 branches from T1. Initially, the one-best sequence T1 = [s11, ... , s1t] is generated by a greedy parser. While generating T1, the parser adds tuples (s1j, p2j), ... , (s1j, pkj) to a list λ for each low confidence prediction p1j given s1j.4 Then, new transition sequences are generated by using the b highest scoring predictions in λ, where b is the beam size. If |λ| < b, all predictions in λ are used. The same greedy parser is used to generate these new sequences although it now starts with s1j instead of an initial parsing state, applies pkj to s1j, and performs further transitions. Once all transition sequences are generated, a parse tree is built from a sequence with the highest score. For our experiments, we set k = 2, which gave noticeably more accurate results than k = 1. We also experimented with k > 2, which did not show significant improvement over k = 2. Note that assigning a greater k may increase |λ| but not the total number of transition sequences generated, which is restricted by the beam size, b. Since each sequence Ti>1 branches from T1, selectional branching performs fewer transitions than beam search: at least d(d−1) 2 transitions are inherited from T1, 4λ is initially empty, which is hidden in Figure 2. where d = min(b, |λ| + 1); thus, it performs that many transitions less than beam search (see the left lower triangle in Figure 2). Furthermore, selectional branching generates a d number of sequences, where d is proportional to the number of low confidence predictions made by T1. To sum up, selectional branching generates the same or fewer transition sequences than beam search and each sequence Ti>1 performs fewer transitions than T1; thus, it performs faster than beam search in general given the same beam size. 
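The branching strategy can be summarized in a short sketch. The code below is schematic rather than the authors' implementation: model.best_two, model.apply, and state.is_final are hypothetical stand-ins for the real parser components, k is fixed to 2, sequences are represented only by their prediction scores, and the prefix scores that a branched sequence inherits from T1 are omitted. The small mock model at the end exists only so the example runs.

```python
# Schematic sketch of selectional branching with k = 2 (not the authors' code).

def greedy_sequence(model, state, margin):
    """Generate one transition sequence greedily, recording branch points."""
    scores, branch_points = [], []
    while not state.is_final():
        (s1, t1), (s2, t2) = model.best_two(state)
        if s1 - s2 <= margin:                     # low-confidence prediction
            branch_points.append((s2, state, t2))
        scores.append(s1)
        state = model.apply(state, t1)
    return scores, branch_points

def selectional_branching(model, init_state, margin, beam):
    scores1, lam = greedy_sequence(model, init_state, margin)   # T_1 and lambda
    sequences = [scores1]
    # d = min(b, |lambda| + 1) sequences in total, counting T_1
    for s2, state, t2 in sorted(lam, key=lambda x: x[0], reverse=True)[:beam - 1]:
        tail, _ = greedy_sequence(model, model.apply(state, t2), margin)
        # scores inherited from T_1's prefix are omitted in this sketch
        sequences.append([s2] + tail)
    # Section 3.4: pick the sequence with the highest average prediction score
    return max(sequences, key=lambda s: sum(s) / len(s))

# Tiny mock parser so the sketch runs: a "state" is the list of transitions
# taken so far, parsing stops after 3 decisions, and the scores are made up.
class MockState(list):
    def is_final(self):
        return len(self) == 3

class MockModel:
    table = {0: [(0.9, "SH"), (0.1, "RE")],      # confident prediction
             1: [(0.55, "LA"), (0.45, "RA")],    # low confidence
             2: [(0.6, "SH"), (0.4, "RE")]}      # low confidence
    def best_two(self, state):
        return self.table[len(state)]
    def apply(self, state, t):
        return MockState(state + [t])

print(selectional_branching(MockModel(), MockState(), margin=0.2, beam=4))
```

Note that the branched sequences reuse the same greedy decoder and simply discard any new branch points, reflecting the constraint that every Ti>1 branches only from T1.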
3.3 Finding low confidence predictions For each parsing state sij, a prediction is made by generating a feature vector xij ∈X, feeding it into a classifier C1 that uses a feature map Φ(x, y) and a weight vector w to measure a score for each label y ∈Y, and choosing a label with the highest score. When there is a tie between labels with the highest score, the first one is chosen. This can be expressed as a logistic regression: C1(x) = arg max y∈Y {f(x, y)} f(x, y) = exp(w · Φ(x, y)) P y′∈Y exp(w · Φ(x, y′)) To find low confidence predictions, we use the margins (score differences) between the best prediction and the other predictions. If all margins are greater than a threshold, the best prediction is considered highly confident; otherwise, it is not. Given this analogy, the k-best predictions can be found as follows (m ≥0 is a margin threshold): Ck(x, m) = K arg max y∈Y {f(x, y)} s.t. f(x, C1(x)) −f(x, y) ≤m ‘K arg max’ returns a set of k′ labels whose margins to C1(x) are smaller than any other label’s margin to C1(x) and also ≤m, where k′ ≤k. When m = 0, it returns a set of the highest scoring labels only, including C1(x). When m = 1, it returns a set of all labels. Given this, a prediction is considered not confident if |Ck(x, m)| > 1. 3.4 Finding the best transition sequence Let Pi be a list of all predictions that lead to generate a transition sequence Ti. The predictions in Pi are either inherited from T1 or made specifically for Ti. In Figure 2, P3 consists of p11 as its first prediction, p22 as its second prediction, and 1055 further predictions made specifically for T3. The score of each prediction is measured by f(x, y) in Section 3.3. Then, the score of Ti is measured by averaging scores of all predictions in Pi. score(Ti) = P p∈Pi score(p) |Pi| Unlike Zhang and Clark (2008), we take the average instead of the sum of all prediction scores. This is because our algorithm does not guarantee the same number of transitions for every sequence, so the sum of all scores would weigh more on sequences with more transitions. We experimented with both the sum and the average, and taking the average led to slightly higher parsing accuracy. 3.5 Bootstrapping transition sequences During training, a training instance is generated for each parsing state sij by taking a feature vector xij and its true label yij. To generate multiple transition sequences during training, the bootstrapping technique of Choi and Palmer (2011) is used, which is described in Algorithm 1.5 Algorithm 1 Bootstrapping Input: Dt: training set, Dd: development set. Output: A model M. 1: r ←0 2: I ←getTrainingInstances(Dt) 3: M0 ←buildModel(I) 4: S0 ←getScore(Dd, M0) 5: while (r = 0) or (Sr−1 < Sr) do 6: r ←r + 1 7: I ←getTrainingInstances(Dt, Mr−1) 8: Mr ←buildModel(I) 9: Sr ←getScore(Dd, Mr) 10: return Mr−1 First, an initial model M0 is trained on all data by taking the one-best sequences, and its score is measured by testing on a development set (lines 2-4). Then, the next model Mr is trained on all data but this time, Mr−1 is used to generate multiple transition sequences (line 7-8). Among all transition sequences generated by Mr−1, training instances from only T1 and Tg are used to train Mr, where T1 is the one-best sequence and Tg is a sequence giving the most accurate parse output compared to the gold-standard tree. The score of Mr is measured (line 9), and repeat the procedure if Sr−1 < Sr; otherwise, return the previous model Mr−1. 
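Looking back at Sections 3.3 and 3.4, the margin-based confidence test and the sequence score are simple enough to sketch directly. The snippet below is a toy illustration, not the parser's code: the feature strings, the weight map, and the label set are invented, and the weight vector is assumed to be a sparse map from (feature, label) pairs to values.

```python
import math

# Toy sketch of f(x, y), C_k(x, m), and the sequence score (not the authors' code).

def scores(features, weights, labels):
    raw = {y: math.exp(sum(weights.get((f, y), 0.0) for f in features))
           for y in labels}
    z = sum(raw.values())
    return {y: v / z for y, v in raw.items()}          # f(x, y)

def k_best_within_margin(features, weights, labels, k=2, m=0.88):
    """C_k(x, m): the k best labels whose margin to C_1(x) is at most m."""
    f = scores(features, weights, labels)
    ranked = sorted(labels, key=f.get, reverse=True)
    best = f[ranked[0]]
    return [y for y in ranked[:k] if best - f[y] <= m], f

def sequence_score(prediction_scores):
    """Section 3.4: average of all prediction scores along a sequence."""
    return sum(prediction_scores) / len(prediction_scores)

feats = ["s0.pos=NN", "b0.pos=VBD", "s0.pos+b0.pos=NN+VBD"]     # invented features
w = {("s0.pos=NN", "LEFT-REDUCE"): 1.2, ("b0.pos=VBD", "LEFT-REDUCE"): 0.8,
     ("s0.pos=NN", "NO-SHIFT"): 1.0}
labels = ["LEFT-REDUCE", "RIGHT-SHIFT", "NO-SHIFT", "NO-REDUCE"]

kept, f = k_best_within_margin(feats, w, labels)
confident = len(kept) == 1          # |C_k(x, m)| > 1 means a low-confidence prediction
print(kept, confident)
```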
5Alternatively, the dynamic oracle approach of Goldberg and Nivre (2012) can be used to generate multiple transition sequences, which is expected to show similar results. 3.6 Adaptive subgradient algorithm To build each model during bootstrapping, we use a stochastic adaptive subgradient algorithm called ADAGRAD that uses per-coordinate learning rates to exploit rarely seen features while remaining scalable (Duchi et al., 2011).This is suitable for NLP tasks where rarely seen features often play an important role and training data consists of a large number of instances with high dimensional features. Algorithm 2 shows our adaptation of ADAGRAD with logistic regression for multi-class classification. Note that when used with logistic regression, ADAGRAD takes a regular gradient instead of a subgradient method for updating weights. For our experiments, ADAGRAD slightly outperformed learning algorithms such as average perceptron (Collins, 2002) or Liblinear SVM (Hsieh et al., 2008). Algorithm 2 ADAGRAD + logistic regression Input: D = {(xi, yi)}n i=1 s.t. xi ∈X, yi ∈Y Φ(x, y) ∈Rd s.t. d = dimension(X) × |Y| T: iterations, α: learning rate, ρ: ridge Output: A weight vector w ∈Rd. 1: w ←0, where w ∈Rd 2: G ←0, where G ∈Rd 3: for t ←1 . . . T do 4: for i ←1 . . . n do 5: Q∀y∈Y ←I(yi, y) −f(xi, y), s.t. Q ∈R|Y| 6: ∂←P y∈Y(Φ(xi, y) · Qy) 7: G ←G + ∂◦∂ 8: for j ←1 . . . d do 9: wj ←wj + α · 1 ρ+√Gj · ∂j I(y, y′) = ( 1 y = y′ 0 otherwise The algorithm takes three hyper-parameters; T is the number of iterations, α is the learning rate, and ρ is the ridge (T > 0, α > 0, ρ ≥0). G is our running estimate of a diagonal covariance matrix for the gradients (per-coordinate learning rates). For each instance, scores for all labels are measured by the logistic regression function f(x, y) in Section 3.3. These scores are subtracted from an output of the indicator function I(y, y′), which forces our model to keep learning this instance until the prediction is 100% confident (in other words, until the score of yi becomes 1). Then, a subgradient is measured by taking all feature vectors together weighted by Q (line 6). This subgradient is used to update G and w, where ◦is the Hadamard product (lines 7-9). ρ is a ridge term to keep the inverse covariance well-conditioned. 1056 4 Experiments 4.1 Corpora For projective parsing experiments, the Penn English Treebank (Marcus et al., 1993) is used with the standard split: sections 2-21 for training, 22 for development, and 23 for evaluation. All constituent trees are converted with the head-finding rules of Yamada and Matsumoto (2003) and the labeling rules of Nivre (2006). For non-projective parsing experiments, four languages from the CoNLLX shared task are used: Danish, Dutch, Slovene, and Swedish (Buchholz and Marsi, 2006). These languages are selected because they contain nonprojective trees and are publicly available from the CoNLL-X webpage.6 Since the CoNLL-X data we have does not come with development sets, the last 10% of each training set is used for development. 4.2 Feature engineering For English, we mostly adapt features from Zhang and Nivre (2011) who have shown state-of-the-art parsing accuracy for transition-based dependency parsing. Their distance features are not included in our approach because they do not seem to show meaningful improvement. Feature selection is done on the English development set. 
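Returning to the optimizer of Section 3.6, Algorithm 2 is compact enough to sketch end to end. The code below is an illustrative adaptation, not the authors' implementation: it uses dense NumPy vectors and a toy two-dimensional dataset purely for brevity, whereas the real parser works with sparse, high-dimensional features.

```python
import numpy as np

# Illustrative adaptation of Algorithm 2: AdaGrad with multi-class logistic regression.

def phi(x, y, n_labels):
    """Block feature map Phi(x, y): place x in the block belonging to label y."""
    v = np.zeros(x.size * n_labels)
    v[y * x.size:(y + 1) * x.size] = x
    return v

def softmax_scores(x, w, n_labels):
    s = np.array([w @ phi(x, y, n_labels) for y in range(n_labels)])
    e = np.exp(s - s.max())
    return e / e.sum()                       # f(x, y) for every label y

def adagrad_train(X, Y, n_labels, T=5, alpha=0.02, rho=0.1):
    d = X.shape[1] * n_labels
    w, G = np.zeros(d), np.zeros(d)
    for _ in range(T):
        for x, gold in zip(X, Y):
            f = softmax_scores(x, w, n_labels)
            Q = -f
            Q[gold] += 1.0                   # Q = I(y_i, y) - f(x_i, y)
            grad = sum(Q[y] * phi(x, y, n_labels) for y in range(n_labels))
            G += grad * grad                 # running sum of squared gradients
            w += alpha * grad / (rho + np.sqrt(G))   # per-coordinate learning rates
    return w

# Toy usage: 2-dimensional features, 3 possible labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
Y = (X[:, 0] > 0).astype(int)
w = adagrad_train(X, Y, n_labels=3)
pred = [int(np.argmax(softmax_scores(x, w, 3))) for x in X]
print(sum(p == y for p, y in zip(pred, Y)), "of", len(Y), "correct")
```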
For the other languages, the same features are used with the addition of morphological features provided by CoNLL-X; specifically, morphological features from the top of σ and the front of β are added as unigram features. Moreover, all POS tag features from English are duplicated with coarsegrained POS tags provided by CoNLL-X. No more feature engineering is done for these languages; it is possible to achieve higher performance by using different features, especially when these languages contain non-projective dependencies whereas English does not, which we will explore in the future. 4.3 Development Several parameters need to be optimized during development. For ADAGRAD, T, α, and ρ need to be tuned (Section 3.6). For bootstrapping, the number of iterations, say r, needs to be tuned (Section 3.5). For selectional branching, the margin threshold m and the beam size b need to be tuned (Section 3.3). First, all parameters are tuned on the English development set by using grid search on T = [1, . . . , 10], α = [0, 01, 0, 02], ρ = [0.1, 0.2], r = [1, 2, 3], 6http://ilk.uvt.nl/conll/ m = [0.83, . . . , 0.92], and b = [16, 32, 64, 80]. As a result, the following parameters are found: α = 0.02, ρ = 0.1, m = 0.88, and b = 64|80. For this development set, the beam size of 64 and 80 gave the exact same result, so we kept the one with a larger beam size (b = 80). 0.92 0.83 0.86 0.88 0.9 91.2 91 91.04 91.08 91.12 91.16 Margin Accuracy 64|80 32 16 b = Figure 3: Parsing accuracies with respect to margins and beam sizes on the English development set. b = 64|80: the black solid line with solid circles, b = 32: the blue dotted line with hollow circles, b = 16: the red dotted line with solid circles. Figure 3 shows parsing accuracies with respect to different margins and beam sizes on the English development set. These parameters need to be tuned jointly because different margins prefer different beam sizes. For instance, m = 0.85 gives the highest accuracy with b = 32, but m = 0.88 gives the highest accuracy with b = 64|80. 14 0 2 4 6 8 10 12 92 88.5 89 89.5 90 90.5 91 91.5 Iteration Accuracy UAS LAS Figure 4: Parsing accuracies with respect to ADAGRAD and bootstrap iterations on the English development set when α = 0.02, ρ = 0.1, m = 0.88, and b = 64|80. UAS: unlabeled attachment score, LAS: labeled attachment score. Figure 4 shows parsing accuracies with respect to ADAGRAD and bootstrap iterations on the English development set. The range 1-5 shows results of 5 ADAGRAD iterations before bootstrapping, the range 6-9 shows results of 4 iterations during the 1057 first bootstrapping, and the range 10-14 shows results of 5 iterations during the second bootstrapping. Thus, the number of bootstrap iteration is 2 where each bootstrapping takes a different number of ADAGRAD iterations. Using an Intel Xeon 2.57GHz machine, it takes less than 40 minutes to train the entire Penn Treebank, which includes times for IO, feature extraction and bootstrapping. 80 0 10 20 30 40 50 60 70 1,200,000 0 200,000 400,000 600,000 800,000 1,000,000 Beam size = 1, 2, 4, 8, 16, 32, 64, 80 Transitions Figure 5: The total number of transitions performed during decoding with respect to beam sizes on the English development set. Figure 5 shows the total number of transitions performed during decoding with respect to beam sizes on the English development set (1,700 sentences, 40,117 tokens). 
With selectional branching, the number of transitions grows logarithmically as the beam size increases whereas it would have grown linearly if beam search were used. We also checked how often the one best sequence is chosen as the final sequence during decoding. Out of 1,700 sentences, the one best sequences are chosen for 1,095 sentences. This implies that about 64% of time, our greedy parser performs as accurately as our non-greedy parser using selectional branching. For the other languages, we use the same values as English for α, ρ, m, and b; only the ADAGRAD and bootstrap iterations are tuned on the development sets of the other languages. 4.4 Projective parsing experiments Before parsing, POS tags were assigned to the training set by using 20-way jackknifing. For the automatic generation of POS tags, we used the domainspecific model of Choi and Palmer (2012a)’s tagger, which gave 97.5% accuracy on the English evaluation set (0.2% higher than Collins (2002)’s tagger). Table 4 shows comparison between past and current state-of-the-art parsers and our approach. The first block shows results from transition-based dependency parsers using beam search. The second block shows results from other kinds of parsing approaches (e.g., graph-based parsing, ensemble parsing, linear programming, dual decomposition). The third block shows results from parsers using external data. The last block shows results from our approach. The Time column show how many seconds per sentence each parser takes.7 Approach UAS LAS Time Zhang and Clark (2008) 92.1 Huang and Sagae (2010) 92.1 0.04 Zhang and Nivre (2011) 92.9 91.8 0.03 Bohnet and Nivre (2012) 93.38 92.44 0.4 McDonald et al. (2005) 90.9 Mcdonald and Pereira (2006) 91.5 Sagae and Lavie (2006) 92.7 Koo and Collins (2010) 93.04 Zhang and McDonald (2012) 93.06 91.86 Martins et al. (2010) 93.26 Rush et al. (2010) 93.8 Koo et al. (2008) 93.16 Carreras et al. (2008) 93.54 Bohnet and Nivre (2012) 93.67 92.68 Suzuki et al. (2009) 93.79 bt = 80, bd = 80, m = 0.88 92.96 91.93 0.009 bt = 80, bd = 64, m = 0.88 92.96 91.93 0.009 bt = 80, bd = 32, m = 0.88 92.96 91.94 0.009 bt = 80, bd = 16, m = 0.88 92.96 91.94 0.008 bt = 80, bd = 8, m = 0.88 92.89 91.87 0.006 bt = 80, bd = 4, m = 0.88 92.76 91.76 0.004 bt = 80, bd = 2, m = 0.88 92.56 91.54 0.003 bt = 80, bd = 1, m = 0.88 92.26 91.25 0.002 bt = 1, bd = 1, m = 0.88 92.06 91.05 0.002 Table 4: Parsing accuracies and speeds on the English evaluation set, excluding tokens containing only punctuation. bt and bd indicate the beam sizes used during training and decoding, respectively. UAS: unlabeled attachment score, LAS: labeled attachment score, Time: seconds per sentence. For evaluation, we use the model trained with b = 80 and m = 0.88, which is the best setting found during development. Our parser shows higher accuracy than Zhang and Nivre (2011), which is the current state-of-the-art transition-based parser that uses beam search. Bohnet and Nivre (2012)’s transition-based system jointly performs POS tagging and dependency parsing, which shows higher accuracy than ours. Our parser gives a comparative accuracy to Koo and Collins (2010) that is a 3rdorder graph-based parsing approach. In terms of speed, our parser outperforms all other transitionbased parsers; it takes about 9 milliseconds per 7Dhillon et al. (2012) and Rush and Petrov (2012) also have shown good results on this data but they are excluded from our comparison because they use different kinds of constituent-to-dependency conversion methods. 
1058 Approach Danish Dutch Slovene Swedish LAS UAS LAS UAS LAS UAS LAS UAS Nivre et al. (2006) 84.77 89.80 78.59 81.35 70.30 78.72 84.58 89.50 McDonald et al. (2006) 84.79 90.58 79.19 83.57 73.44 83.17 82.55 88.93 Nivre (2009) 84.2 75.2 F.-González and G.-Rodríguez (2012) 85.17 90.10 83.55 89.30 Nivre and McDonald (2008) 86.67 81.63 75.94 84.66 Martins et al. (2010) 91.50 84.91 85.53 89.80 bt = 80, bd = 1, m = 0.88 86.75 91.04 80.75 83.59 75.66 83.29 86.32 91.12 bt = 80, bd = 80, m = 0.88 87.27 91.36 82.45 85.33 77.46 84.65 86.80 91.36 Table 5: Parsing accuracies on four languages with non-projective dependencies, excluding punctuation. sentence using the beam size of 80. Our parser is implemented in Java and tested on an Intel Xeon 2.57GHz. Note that we do not include input/output time for our speed comparison. For a proof of concept, we run the same model, trained with bt = 80, but decode with different beam sizes using the same margin. Surprisingly, our parser gives the same accuracy (0.01% higher for labeled attachment score) on this data even with bd = 16. More importantly, bd = 16 shows about the same parsing speed as bd = 80, which indicates that selectional branching automatically reduced down the beam size by estimating low confidence predictions, so even if we assigned a larger beam size for decoding, it would have performed as efficiently. This implies that we no longer need to be so conscious about the beam size during decoding. Another interesting part is that (bt = 80, bd = 1) shows higher accuracy than (bt = 1, bd = 1); this implies that our training method of bootstrapping transition sequences can improve even a greedy parser. Notice that our greedy parser shows higher accuracy than many other greedy parsers (Hall et al., 2006; Goldberg and Elhadad, 2010) because it uses the non-local features of Zhang and Nivre (2011) and the bootstrapping technique of Choi and Palmer (2011) that had not been used for most other greedy parsing approaches. 4.5 Non-projective parsing experiments Table 5 shows comparison between state-of-the-art parsers and our approach for four languages with non-projective dependencies. Nivre et al. (2006) uses a pseudo-projective transition-based parsing approach. McDonald et al. (2006) uses a 2nd-order maximum spanning tree approach. Nivre (2009) and Fernández-González and Gómez-Rodríguez (2012) use different non-projective transition-based parsing approaches. Nivre and McDonald (2008) uses an ensemble model between transition-based and graph-based parsing approaches. Martins et al. (2010) uses integer linear programming for the optimization of their parsing model. Some of these approaches use greedy parsers, so we include our results from models using (bt = 80, bd = 1, m = 0.88), which finds only the one-best sequences during decoding although it is trained on multiple transition sequences (see Section 4.4). Our parser shows higher accuracies for most languages except for unlabeled attachment scores in Danish and Slovene. Our greedy approach outperforms both Nivre (2009) and Fernández-González and Gómez-Rodríguez (2012) who use different nonprojective parsing algorithms. 60 0 10 20 30 40 50 130 0 20 40 60 80 100 Sentence length Transitions Figure 6: The # of transitions performed during decoding with respect to sentence lengths for Dutch. Figure 6 shows the number of transitions performed during decoding with respect to sentence lengths for Dutch using bd = 1. Our parser still shows a linear growth in transition during decoding. 
5 Related work Our parsing algorithm is most similar to Choi and Palmer (2011) who integrated our LEFT-REDUCE transition into Nivre’s list-based algorithm. Our algorithm is distinguished from theirs because ours gives different parsing complexities of O(n) and O(n2) for projective and non-projective parsing, respectively, whereas their algorithm gives O(n2) 1059 for both cases; this is possible because of the new integration of the RIGHT-SHIFT and NO-REDUCE transitions. There are other transition-based dependency parsing algorithms that take a similar approach; Nivre (2009) integrated a SWAP transition into Nivre’s arc-standard algorithm (Nivre, 2004) and Fernández-González and Gómez-Rodríguez (2012) integrated a buffer transition into Nivre’s arc-eager algorithm to handle non-projectivity. Our selectional branching method is most relevant to Zhang and Clark (2008) who introduced a transition-based dependency parsing model that uses beam search. Huang and Sagae (2010) later applied dynamic programming to this approach and showed improved efficiency. Zhang and Nivre (2011) added non-local features to this approach and showed improved parsing accuracy. Bohnet and Nivre (2012) introduced a transition-based system that jointly performed POS tagging and dependency parsing. Our work is distinguished from theirs because we use selectional branching instead. 6 Conclusion We present selectional branching that uses confidence estimates to decide when to employ a beam. Coupled with our new hybrid parsing algorithm, ADAGRAD, rich non-local features, and bootstrapping, our parser gives higher parsing accuracy than most other transition-based dependency parsers in multiple languages and shows faster parsing speed. It is interesting to see that our greedy parser outperformed most other greedy dependency parsers. This is because our parser used both bootstrapping and Zhang and Nivre (2011)’s non-local features, which had not been used by other greedy parsers. In the future, we will experiment with more advanced dependency representations (de Marneffe and Manning, 2008; Choi and Palmer, 2012b) to show robustness of our approach. Furthermore, we will evaluate individual methods of our approach separately to show impact of each method on parsing performance. We also plan to implement the typical beam search approach to make a direct comparison to our selectional branching.8 Acknowledgments Special thanks are due to Luke Vilnis of the University of Massachusetts Amherst for insights on 8Our parser is publicly available under an open source project, ClearNLP (clearnlp.googlecode.com). the ADAGRAD derivation. We gratefully acknowledge a grant from the Defense Advanced Research Projects Agency (DARPA) under the DEFT project, solicitation #: DARPA-BAA-12-47. References Bernd Bohnet and Joakim Nivre. 2012. A TransitionBased System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP’12, pages 1455–1465. Sabine Buchholz and Erwin Marsi. 2006. CoNLLX shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL’06, pages 149–164. Xavier Carreras, Michael Collins, and Terry Koo. 2008. TAG, Dynamic Programming, and the Perceptron for Efficient, Feature-rich Parsing. In Proceedings of the 12th Conference on Computational Natural Language Learning, CoNLL’08, pages 9–16. 
Daniel Cer, Marie-Catherine de Marneffe, Daniel Jurafsky, and Christopher D. Manning. 2010. Parsing to Stanford Dependencies: Trade-offs between speed and accuracy. In Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC’10. Jinho D. Choi and Martha Palmer. 2011. Getting the Most out of Transition-based Dependency Parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL:HLT’11, pages 687– 692. Jinho D. Choi and Martha Palmer. 2012a. Fast and Robust Part-of-Speech Tagging Using Dynamic Model Selection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, ACL’12, pages 363–367. Jinho D. Choi and Martha Palmer. 2012b. Guidelines for the Clear Style Constituent to Dependency Conversion. Technical Report 01-12, University of Colorado Boulder. Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of the conference on Empirical methods in natural language processing, EMNLP’02, pages 1–8. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In Proceedings of the COLING workshop on Cross-Framework and Cross-Domain Parser Evaluation. 1060 Paramveer S. Dhillon, Jordan Rodu, Michael Collins, Dean P. Foster, and Lyle H. Ungar. 2012. Spectral Dependency Parsing with Latent Variables. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP’12, pages 205–213. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. The Journal of Machine Learning Research, 12(39):2121–2159. Daniel Fernández-González and Carlos GómezRodríguez. 2012. Improving Transition-Based Dependency Parsing with Buffer Transitions. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP’12, pages 308–319. Yoav Goldberg and Michael Elhadad. 2010. An Efficient Algorithm for Easy-First Non-Directional Dependency Parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT:NAACL’10, pages 742–750. Yoav Goldberg and Joakim Nivre. 2012. A Dynamic Oracle for Arc-Eager Dependency Parsing. In Proceedings of the 24th International Conference on Computational Linguistics, COLING’12. Johan Hall, Joakim Nivre, and Jens Nilsson. 2006. Discriminative Classifiers for Deterministic Dependency Parsing. In In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, COLING-ACL’06, pages 316– 323. Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. 2008. A Dual Coordinate Descent Method for Large-scale Linear SVM. In Proceedings of the 25th international conference on Machine learning, ICML’08, pages 408–415. Liang Huang and Kenji Sagae. 2010. Dynamic Programming for Linear-Time Incremental Parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL’10. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured Perceptron with Inexact Search. 
In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT’12, pages 142–151. Terry Koo and Michael Collins. 2010. Efficient Thirdorder Dependency Parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL’10. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-supervised Dependency Parsing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, ACL:HLT’08, pages 595–603. Sandra Kübler, Ryan T. McDonald, and Joakim Nivre. 2009. Dependency Parsing. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. André F. T. Martins, Noah A. Smith, Eric P. Xing, Pedro M. Q. Aguiar, and Mário A. T. Figueiredo. 2010. Turbo Parsers: Dependency Parsing by Approximate Variational Inference. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP’10, pages 34–44. Ryan Mcdonald and Fernando Pereira. 2006. Online Learning of Approximate Dependency Parsing Algorithms. In Proceedings of the Annual Meeting of the European American Chapter of the Association for Computational Linguistics, EACL’06, pages 81–88. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 91–98. Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual Dependency Analysis with a Two-Stage Discriminative Parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL’06, pages 216–220. Joakim Nivre and Ryan McDonald. 2008. Integrating Graph-based and Transition-based Dependency Parsers. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL:HLT’08, pages 950–958. Joakim Nivre and Jens Nilsson. 2005. PseudoProjective Dependency Parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, ACL’05, pages 99–106. Joakim Nivre, Johan Hall, Jens Nilsson, Gül¸sen Eryiˇgit, and Svetoslav Marinov. 2006. Labeled pseudoprojective dependency parsing with support vector machines. In Proceedings of the 10th Conference on Computational Natural Language Learning, CoNLL’06, pages 221–225. Joakim Nivre. 2003. An Efficient Algorithm for Projective Dependency Parsing. In Proceedings of the 8th International Workshop on Parsing Technologies, IWPT’03, pages 149–160. 1061 Joakim Nivre. 2004. Incrementality in Deterministic Dependency Parsing. In Proceedings of the ACL’04 Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 50–57. Joakim Nivre. 2006. Inductive Dependency Parsing. Springer. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. Joakim Nivre. 2009. Non-Projective Dependency Parsing in Expected Linear Time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, ACLIJCNLP’09, pages 351–359. Alexander M. Rush and Slav Petrov. 2012. Vine Pruning for Efficient Multi-Pass Dependency Parsing. 
In Proceedings of the 12th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL:HLT’12. Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On Dual Decomposition and Linear Programming Relaxations for Natural Language Processing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP’10, pages 1–11. Kenji Sagae and Alon Lavie. 2006. Parser Combination by Reparsing. In In Proceedings of the Human Language Technology Conference of the NAACL, NAACL’06, pages 129–132. Jun Suzuki, Hideki Isozaki, Xavier Carreras, and Michael Collins. 2009. An Empirical Study of Semi-supervised Structured Conditional Models for Dependency Parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP’09, pages 551–560. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machine. In Proceedings of the 8th International Workshop on Parsing Technologies, IWPT’03, pages 195– 206. Yue Zhang and Stephen Clark. 2008. A Tale of Two Parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP’08, pages 562–571. Hao Zhang and Ryan McDonald. 2012. Generalized Higher-Order Dependency Parsing with Cube Pruning. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL’12, pages 320–331. Yue Zhang and Joakim Nivre. 2011. Transition-based Dependency Parsing with Rich Non-local Features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL’11, pages 188–193. 1062
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1063–1072, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Bilingually-Guided Monolingual Dependency Grammar Induction Kai Liu†§, Yajuan L¨u†, Wenbin Jiang†, Qun Liu‡† †Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences P.O. Box 2704, Beijing 100190, China {liukai,lvyajuan,jiangwenbin,liuqun}@ict.ac.cn ‡Centre for Next Generation Localisation Faculty of Engineering and Computing, Dublin City University [email protected] §University of Chinese Academy of Sciences Abstract This paper describes a novel strategy for automatic induction of a monolingual dependency grammar under the guidance of bilingually-projected dependency. By moderately leveraging the dependency information projected from the parsed counterpart language, and simultaneously mining the underlying syntactic structure of the language considered, it effectively integrates the advantages of bilingual projection and unsupervised induction, so as to induce a monolingual grammar much better than previous models only using bilingual projection or unsupervised induction. We induced dependency grammar for five different languages under the guidance of dependency information projected from the parsed English translation, experiments show that the bilinguallyguided method achieves a significant improvement of 28.5% over the unsupervised baseline and 3.0% over the best projection baseline on average. 1 Introduction In past decades supervised methods achieved the state-of-the-art in constituency parsing (Collins, 2003; Charniak and Johnson, 2005; Petrov et al., 2006) and dependency parsing (McDonald et al., 2005a; McDonald et al., 2006; Nivre et al., 2006; Nivre et al., 2007; Koo and Collins, 2010). For supervised models, the human-annotated corpora on which models are trained, however, are expensive and difficult to build. As alternative strategies, methods which utilize raw texts have been investigated recently, including unsupervised methods which use only raw texts (Klein and Manning, 2004; Smith and Eisner, 2005; William et al., 2009), and semi-supervised methods (Koo et al., 2008) which use both raw texts and annotated corpus. And there are a lot of efforts have also been devoted to bilingual projection (Chen et al., 2010), which resorts to bilingual text with one language parsed, and projects the syntactic information from the parsed language to the unparsed one (Hwa et al., 2005; Ganchev et al., 2009). In dependency grammar induction, unsupervised methods achieve continuous improvements in recent years (Klein and Manning, 2004; Smith and Eisner, 2005; Bod, 2006; William et al., 2009; Spitkovsky et al., 2010). Relying on a predefined distributional assumption and iteratively maximizing an approximate indicator (entropy, likelihood, etc.), an unsupervised model usually suffers from two drawbacks, i.e., lower performance and higher computational cost. On the contrary, bilingual projection (Hwa et al., 2005; Smith and Eisner, 2009; Jiang and Liu, 2010) seems a promising substitute for languages with a large amount of bilingual sentences and an existing parser of the counterpart language. 
By projecting syntactic structures directly (Hwa et al., 2005; Smith and Eisner, 2009; Jiang and Liu, 2010) across bilingual texts or indirectly across multilingual texts (Snyder et al., 2009; McDonald et al., 2011; Naseem et al., 2012), a better dependency grammar can be easily induced, if syntactic isomorphism is largely maintained between target and source languages. Unsupervised induction and bilingual projection run according to totally different principles, the former mines the underlying structure of the monolingual language, while the latter leverages the syntactic knowledge of the parsed counter1063 Bilingual corpus Joint Optimization Bilingually-guided Parsing model Unsupervised objective Projection objective Random Treebank Evolved treebank Target sentences Source sentences projection Figure 1: Training the bilingually-guided parsing model by iteration. part language. Considering this, we propose a novel strategy for automatically inducing a monolingual dependency grammar under the guidance of bilingually-projected dependency information, which integrates the advantage of bilingual projection into the unsupervised framework. A randomly-initialized monolingual treebank evolves in a self-training iterative procedure, and the grammar parameters are tuned to simultaneously maximize both the monolingual likelihood and bilingually-projected likelihood of the evolving treebank. The monolingual likelihood is similar to the optimization objectives of conventional unsupervised models, while the bilinguallyprojected likelihood is the product of the projected probabilities of dependency trees. By moderately leveraging the dependency information projected from the parsed counterpart language, and simultaneously mining the underlying syntactic structure of the language considered, we can automatically induce a monolingual dependency grammar which is much better than previous models only using bilingual projection or unsupervised induction. In addition, since both likelihoods are fundamentally factorized into dependency edges (of the hypothesis tree), the computational complexity approaches to unsupervised models, while with much faster convergence. We evaluate the final automatically-induced dependency parsing model on 5 languages. Experimental results show that our method significantly outperforms previous work based on unsupervised method or indirect/direct dependency projection, where we see an average improvement of 28.5% over unsupervised baseline on all languages, and the improvements are 3.9%/3.0% over indirect/direct baselines. And our model achieves the most significant gains on Chinese, where the improvements are 12.0%, 4.5% over indirect and direct projection baselines respectively. In the rest of the paper, we first describe the unsupervised dependency grammar induction framework in section 2 (where the unsupervised optimization objective is given), and introduce the bilingual projection method for dependency parsing in section 3 (where the projected optimization objective is given); Then in section 4 we present the bilingually-guided induction strategy for dependency grammar (where the two objectives above are jointly optimized, as shown in Figure 1). After giving a brief introduction of previous work in section 5, we finally give the experimental results in section 6 and conclude our work in section 7. 
2 Unsupervised Dependency Grammar Induction In this section, we introduce the unsupervised objective and the unsupervised training algorithm which is used as the framework of our bilinguallyguided method. Unlike previous unsupervised work (Klein and Manning, 2004; Smith and Eisner, 2005; Bod, 2006), we select a self-training approach (similar to hard EM method) to train the unsupervised model. And the framework of our unsupervised model builds a random treebank on the monolingual corpus firstly for initialization and trains a discriminative parsing model on it. Then we use the parser to build an evolved treebank with the 1-best result for the next iteration run. In this way, the parser and treebank evolve in an iterative way until convergence. Let’s introduce the parsing objective firstly: Define ei as the ith word in monolingual sentence E; deij denotes the word pair dependency relationship (ei →ej). Based on the features around deij, we can calculate the probability Pr(y|deij) that the word pair deij can form a dependency arc 1064 as: Pr(y|deij) = 1 Z(deij )exp( X n λn · fn(deij, y)) (1) where y is the category of the relationship of deij: y = + means it is the probability that the word pair deij can form a dependency arc and y = − means the contrary. λn denotes the weight for feature function fn(deij, y), and the features we used are presented in Table 1 (Section 6). Z(deij) is a normalizing constant: Z(deij) = X y exp( X n λn · fn(deij, y)) (2) Given a sentence E, parsing a dependency tree is to find a dependency tree DE with maximum probability PE: PE = arg max DE Y deij ∈DE Pr(+|deij) (3) 2.1 Unsupervised Objective We select a simple classifier objective function as the unsupervised objective function which is instinctively in accordance with the parsing objective: θ(λ) = Y de∈DE Pr(+|de) Y de∈e DE Pr(−|de) (4) where E is the monolingual corpus and E ∈E, DE is the treebank that contains all DE in the corpus, and eDE denotes all other possible dependency arcs which do not exist in the treebank. Maximizing the Formula (4) is equivalent to maximizing the following formula: θ1(λ) = X de∈DE log Pr(+|de) + X de∈e DE log Pr(−|de) (5) Since the size of edges between DE and eDE is disproportionate, we use an empirical value to reduce the impact of the huge number of negative instances: θ2(λ) = X de∈DE log Pr(+|de) + |DE| | eDE| X de∈e DE log Pr(−|de) (6) where |x| is the size of x. Algorithm 1 Training unsupervised model 1: build random DE 2: λ ←train(DE, eDE) 3: repeat 4: for each E ∈E do ⊲E step 5: DE ←parse(E,λ) 6: λ ←train(DE, eDE) ⊲M step 7: until convergence Bush held talk with Sharon a bushi yu juxing shalong huitan le Ă Ă ᏗҔ Ϣ В㸠 ≭啭 Ӯ䇜 њ Figure 2: Projecting a Chinese dependency tree to English side according to DPA. Solid arrows are projected dependency arcs; dashed arrows are missing dependency arcs. 2.2 Unsupervised Training Algorithm Algorithm 1 outlines the unsupervised training in its entirety, where the treebank DE and unsupervised parsing model with λ are updated iteratively. In line 1 we build a random treebank DE on the monolingual corpus, and then train the parsing model with it (line 2) through a training procedure train(·, ·) which needs DE and eDE as classification instances. 
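Before continuing with the rest of Algorithm 1, the probability model of Formulas (1)–(2) and the objective of Formula (6) can be illustrated with a small sketch. The snippet is a toy illustration, not the authors' code: the feature strings, the weight map, and the handful of arc instances are invented for the example.

```python
import math

# Toy sketch of the arc classifier (Formulas 1-2) and the unsupervised
# objective with negative-instance down-weighting (Formula 6).

def arc_prob(features, weights, y):
    """Pr(y | de_ij) for y in {'+', '-'} under a two-class log-linear model."""
    num = {c: math.exp(sum(weights.get((f, c), 0.0) for f in features))
           for c in ("+", "-")}
    return num[y] / (num["+"] + num["-"])

def unsupervised_objective(treebank_arcs, other_arcs, weights):
    """Formula (6): arcs in the current treebank D_E vs. all other candidate
    arcs, with the negative term scaled by |D_E| / |D~_E|."""
    pos = sum(math.log(arc_prob(f, weights, "+")) for f in treebank_arcs)
    neg = sum(math.log(arc_prob(f, weights, "-")) for f in other_arcs)
    return pos + (len(treebank_arcs) / len(other_arcs)) * neg

# Invented example: two treebank arcs and four rejected candidate arcs.
w = {("pos=VV->NN", "+"): 0.7, ("dist=1", "+"): 0.3, ("pos=NN->VV", "-"): 0.5}
in_tree = [["pos=VV->NN", "dist=1"], ["pos=VV->NN"]]
rejected = [["pos=NN->VV"], ["pos=NN->VV", "dist=1"], ["dist=1"], []]
print(round(unsupervised_objective(in_tree, rejected, w), 3))
```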
From line 3-7, we train the unsupervised model in self training iterative procedure, where line 4-5 are similar to the E-step in EM algorithm where calculates objective instead of expectation of 1-best tree (line 5) which is parsed according to the parsing objective (Formula 3) by parsing process parse(·, ·), and update the tree bank with the tree. Similar to M-step in EM, the algorithm maximizes the whole treebank’s unsupervised objective (Formula 6) through the training procedure (line 6). 3 Bilingual Projection of Dependency Grammar In this section, we introduce our projection objective and training algorithm which trains the model with arc instances. Because of the heterogeneity between different languages and word alignment errors, projection methods may contain a lot of noises. Take Figure 2 as an example, following the Direct Projection Algorithm (DPA) (Hwa et al., 2005) (Section 5), the dependency relationships between words can be directly projected from the source 1065 Algorithm 2 Training projection model 1: DP , DN ←proj(F, DF , A, E) 2: repeat ⊲train(DP , DN) 3: ∇φ ←grad(DP, DN, φ(λ)) 4: λ ←climb(φ, ∇φ, λ) 5: until maximization language to the target language. Therefore, we can hardly obtain a treebank with complete trees through direct projection. So we extract projected discrete dependency arc instances instead of treebank as training set for the projected grammar induction model. 3.1 Projection Objective Correspondingly, we select an objective which has the same form with the unsupervised one: φ(λ) = X de∈DP log Pr(+|de) + X de∈DN log Pr(−|de) (7) where DP is the positive dependency arc instance set, which is obtained by direct projection methods (Hwa et al., 2005; Jiang and Liu, 2010) and DN is the negative one. 3.2 Projection Algorithm Basically, the training procedure in line 2,7 of Algorithm 1 can be divided into smaller iterative steps, and Algorithm 2 outlines the training step of projection model with instances. F in Algorithm 2 is source sentences in bilingual corpus, and A is the alignments. Function grad(·, ·, ·) gives the gradient (∇φ) and the objective is optimized with a generic optimization step (such as an LBFGS iteration (Zhu et al., 1997)) in the subroutine climb(·, ·, ·). 4 Bilingually-Guided Dependency Grammar Induction This section presents our bilingually-guided grammar induction model, which incorporates unsupervised framework and bilingual projection model through a joint approach. According to following observation: unsupervised induction model mines underlying syntactic structure of the monolingual language, however, it is hard to find good grammar induction in the exponential parsing space; bilingual projection obtains relatively reliable syntactic knowledge of the parsed counterpart, but it possibly contains a lot of noises (e.g. Figure 2). We believe that unsupervised model and projection model can complement each other and a joint model which takes better use of both unsupervised parse trees and projected dependency arcs can give us a better parser. Based on the idea, we propose a novel strategy for training monolingual grammar induction model with the guidance of unsupervised and bilingually-projected dependency information. Figure 1 outlines our bilingual-guided grammar induction process in its entirety. In our method, we select compatible objectives for unsupervised and projection models, in order to they can share the same grammar parameters. 
Then we incorporate projection model into our iterative unsupervised framework, and jointly optimize unsupervised and projection objectives with evolving treebank and constant projection information respectively. In this way, our bilingually-guided model’s parameters are tuned to simultaneously maximizing both monolingual likelihood and bilingually-projected likelihood by 4 steps: 1. Randomly build treebank on target sentences for initialization, and get the projected arc instances through projection from bitext. 2. Train the bilingually-guided grammar induction model by multi-objective optimization method with unsupervised objective and projection objective on treebank and projected arc instances respectively. 3. Use the parsing model to build new treebank on target language for next iteration. 4. Repeat steps 1, 2 and 3 until convergence. The unsupervised objective is optimized by the loop—”tree bank→optimized model→new tree bank”. The treebank is evolved for runs. The unsupervised model gets projection constraint implicitly from those parse trees which contain information from projection part. The projection objective is optimized by the circulation—”projected instances→optimized model”, these projected instances will not change once we get them. The iterative procedure proposed here is not a co-training algorithm (Sarkar, 2001; Hwa et al., 2003), because the input of the projection objective is static. 1066 4.1 Joint Objective For multi-objective optimization method, we employ the classical weighted-sum approach which just calculates the weighted linear sum of the objectives: OBJ = X m weightmobjm (8) We combine the unsupervised objective (Formula (6)) and projection objective (Formula (7)) together through the weighted-sum approach in Formula (8): ℓ(λ) = αθ2(λ) + (1 −α)φ(λ) (9) where ℓ(λ) is our weight-sum objective. And α is a mixing coefficient which reflects the relative confidence between the unsupervised and projection objectives. Equally, α and (1−α) can be seen as the weights in Formula (8). In that case, we can use a single parameter α to control both weights for different objective functions. When α = 1 it is the unsupervised objective function in Formula (6). Contrary, if α = 0, it is the projection objective function (Formula (7)) for projected instances. With this approach, we can optimize the mixed parsing model by maximizing the objective in Formula (9). Though the function (Formula (9)) is an interpolation function, we use it for training instead of parsing. In the parsing procedure, our method calculates the probability of a dependency arc according to the Formula (2), while the interpolating method calculates it by: Pr(y|deij) =αPr1(y|deij) + (1 −α)Pr2(y|deij) (10) where Pr1(y|deij) and Pr2(y|deij) are the probabilities provided by different models. 4.2 Training Algorithm We optimize the objective (Formula (9)) via a gradient-based search algorithm. And the gradient with respect to λk takes the form: ∇ℓ(λk) = α∂θ2(λ) ∂λk + (1 −α)∂φ(λ) ∂λk (11) Algorithm 3 outlines our joint training procedure, which tunes the grammar parameter λ simultaneously maximize both unsupervised objective Algorithm 3 Training joint model 1: DP , DN ←proj(F, DF , A, E) 2: build random DE 3: λ ←train(DP , DN) 4: repeat 5: for each E ∈E do ⊲E step 6: DE ←parse(E,λ) 7: ∇ℓ(λ) ←grad(DE, eDE, DP , DN, ℓ(λ)) 8: λ ←climb(ℓ(λ), ∇ℓ(λ), λ) ⊲M step 9: until convergence and projection objective. And it incorporates unsupervised framework and projection model algorithm together. 
It is grounded on the work which uses features in the unsupervised model (BergKirkpatrick et al., 2010). In line 1, 2 we get projected dependency instances from source side according to projection methods and build a random treebank (step 1). Then we train an initial model with projection instances in line 3. From line 4-9, the objective is optimized with a generic optimization step in the subroutine climb(·, ·, ·, ·, ·). For each sentence we parse its dependency tree, and update the tree into the treebank (step 3). Then we calculate the gradient and optimize the joint objective according to the evolved treebank and projected instances (step 2). Lines 5-6 are equivalent to the E-step of the EM algorithm, and lines 7-8 are equivalent to the M-step. 5 Related work The DMV (Klein and Manning, 2004) is a singlestate head automata model (Alshawi, 1996) which is based on POS tags. And DMV learns the grammar via inside-outside re-estimation (Baker, 1979) without any smoothing, while Spitkovsky et al. (2010) utilizes smoothing and learning strategy during grammar learning and William et al. (2009) improves DMV with richer context. The dependency projection method DPA (Hwa et al., 2005) based on Direct Correspondence Assumption (Hwa et al., 2002) can be described as: if there is a pair of source words with a dependency relationship, the corresponding aligned words in target sentence can be considered as having the same dependency relationship equivalently (e.g. Figure 2). The Word Pair Classification (WPC) method (Jiang and Liu, 2010) modifies the DPA method and makes it more robust. Smith and Eisner (2009) propose an adaptation method founded on quasi-synchronous grammar features 1067 Type Feature Template Unigram wordi posi wordi ◦posi wordj posj wordj ◦posj Bigram wordi ◦posj wordj ◦posi posi ◦posj wordi ◦wordj wordi ◦posi ◦wordj wordi ◦wordj ◦posj wordi ◦posi ◦posj posi ◦wordj ◦posj wordi ◦posi ◦wordj ◦posj Surrounding posi−1 ◦posi ◦posj posi ◦posi+1 ◦posj posi ◦posj−1 ◦posj posi ◦posj ◦posj+1 posi−1 ◦posi ◦posj−1 posi ◦posi+1 ◦posj+1 posi−1 ◦posj−1 ◦posj posi+1 ◦posj ◦posj+1 posi−1 ◦posi ◦posj+1 posi ◦posi+1 ◦posj−1 posi−1 ◦posj ◦posj+1 posi+1 ◦posj−1 ◦posj posi−1 ◦posi ◦posj−1 ◦posj posi ◦posi+1 ◦posj ◦posj+1 posi ◦posi+1 ◦posj−1 ◦posj posi−1 ◦posi ◦posj ◦posj+1 Table 1: Feature templates for dependency parsing. For edge deij: wordi is the parent word and wordj is the child word, similar to ”pos”. ”+1” denotes the preceding token of the sentence, similar to ”-1”. for dependency projection and annotation, which requires a small set of dependency annotated corpus of target language. Similarly, using indirect information from multilingual (Cohen et al., 2011; T¨ackstr¨om et al., 2012) is an effective way to improve unsupervised parsing. (Zeman and Resnik, 2008; McDonald et al., 2011; Søgaard, 2011) employ non-lexicalized parser trained on other languages to process a target language. McDonald et al. (2011) adapts their multi-source parser according to DCA, while Naseem et al. (2012) selects a selective sharing model to make better use of grammar information in multi-sources. Due to similar reasons, many works are devoted to POS projection (Yarowsky et al., 2001; Shen et al., 2007; Naseem et al., 2009), and they also suffer from similar problems. Some seek for unsupervised methods, e.g. Naseem et al. (2009), and some further improve the projection by a graphbased projection (Das and Petrov, 2011). 
Our model differs from the approaches above in its emphasis on utilizing information from both sides of bilingual corpus in an unsupervised training framework, while most of the work above only utilize the information from a single side. 6 Experiments In this section, we evaluate the performance of the MST dependency parser (McDonald et al., 2005b) which is trained by our bilingually-guided model on 5 languages. And the features used in our experiments are summarized in Table 1. 6.1 Experiment Setup Datasets and Evaluation Our experiments are run on five different languages: Chinese(ch), Danish(da), Dutch(nl), Portuguese(pt) and Swedish(sv) (da, nl, pt and sv are free data sets distributed for the 2006 CoNLL Shared Tasks (Buchholz and Marsi, 2006)). For all languages, we only use English-target parallel data: we take the FBIS English-Chinese bitext as bilingual corpus for English-Chinese dependency projection which contains 239K sentence pairs with about 8.9M/6.9M words in English/Chinese, and for other languages we use the readily available data in the Europarl corpus. Then we run tests on the Penn Chinese Treebank (CTB) and CoNLL-X test sets. English sentences are tagged by the implementations of the POS tagger of Collins (2002), which is trained on WSJ. The source sentences are then parsed by an implementation of 2nd-ordered MST model of McDonald and Pereira (2006), which is trained on dependency trees extracted from Penn Treebank. As the evaluation metric, we use parsing accuracy which is the percentage of the words which have found their correct parents. We evaluate on sentences with all length for our method. Training Regime In experiments, we use the projection method proposed by Jiang and Liu (2010) to provide the projection instances. And we train the projection part α = 0 first for initialization, on which the whole model will be trained. Availing of the initialization method, the model can converge very fast (about 3 iterations is sufficient) and the results are more stable than the ones trained on random initialization. Baselines We compare our method against three kinds of different approaches: unsupervised method (Klein and Manning, 2004); singlesource direct projection methods (Hwa et al., 2005; Jiang and Liu, 2010); multi-source indirect projection methods with multi-sources (M1068 60.0 61.5 ch 50.3 51.2 da 59.5 60.5 accuracy% nl 70.5 74.5 pt 61.5 65.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 alpha sv Figure 3: The performance of our model with respect to a series of ratio α cDonald et al., 2011; Naseem et al., 2012). 6.2 Results We test our method on CTB and CoNLL-X free test data sets respectively, and the performance is summarized in Table 2. Figure 3 presents the performance with different α on different languages. Compare against Unsupervised Baseline Experimental results show that our unsupervised framework’s performance approaches to the DMV method. And the bilingually-guided model can promote the unsupervised method consistency over all languages. On the best results’ average of four comparable languages (da, nl, pt, sv), the promotion gained by our model is 28.5% over the baseline method (DMV) (Klein and Manning, 2004). Compare against Projection Baselines For all languages, the model consistently outperforms on direct projection baseline. 
On the average of each language’s best result, our model outperforms all kinds of baselines, yielding 3.0% gain over the single-source direct-projection method (Jiang and Liu, 2010) and 3.9% gain over the multi-source indirect-projection method (McDonald et al., 2011). On the average of all results with different parameters, our method also gains more than 2.0% improvements on all baselines. Particularly, our model achieves the most significant gains on Chinese, where the improvements are 4.5%/12.0% on direct/indirect projection baseAccuracy% Model ch da nl pt sv avg DMV 42.5∗ 33.4 38.5 20.1 44.0 —.– DPA 53.9 —.– —.– —.– —.– —.– WPC 56.8 50.1 58.4 70.5 60.8 59.3 Transfer 49.3 49.5 53.9 75.8 63.6 58.4 Selective 51.2 —.– 55.9 73.5 61.5 —.– unsuper 22.6 41.6 15.2 45.7 42.4 33.5 avg 61.0 50.7 59.9 72.0 63.1 61.3 max 61.3 51.1 60.1 74.2 64.6 62.3 Table 2: The directed dependency accuracy with different parameter of our model and the baselines. The first section of the table (row 3-7) shows the results of the baselines: a unsupervised method baseline (Klein and Manning, 2004)(DMV); a single-source projection method baseline (Hwa et al., 2005) (DPA) and its improvement (Jiang and Liu, 2010)(WPC); two multisource baselines (McDonald et al., 2011)(Transfer) and (Naseem et al., 2012)(Selective). The second section of the table (row 8) presents the result of our unsupervised framework (unsuper). The third section gives the mean value (avg) and maximum value (max) of our model with different α in Figure 3. *: The result is based on sentences with 10 words or less after the removal of punctuation, it is an incomparable result. lines. The results in Figure 3 prove that our unsupervised framework α = 1 can promote the grammar induction if it has a good start (well initialization), and it will be better once we incorporate the information from the projection side (α = 0.9). And the maximum points are not in α = 1, which implies that projection information is still available for the unsupervised framework even if we employ the projection model as the initialization. So we suggest that a greater parameter is a better choice for our model. And there are some random factors in our model which make performance curves with more fluctuation. And there is just a little improvement shown in da, in which the same situation is observed by (McDonald et al., 2011). 6.3 Effects of the Size of Training Corpus To investigate how the size of the training corpus influences the result, we train the model on extracted bilingual corpus with varying sizes: 10K, 50K, 100K, 150K and 200K sentences pairs. As shown in Figure 4, our approach continu1069 53 54 55 56 57 58 59 60 61 62 63 10K 50K 100K 150K 200K accuracy% size of training set our model baseline Figure 4: Performance on varying sizes (average of 5 languages, α = 0.9) 51 52 53 54 55 56 57 58 59 60 61 62 63 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 accuracy% noise rate our model baseline Figure 5: Performance on different projection quality (average of 5 languages, α = 0.9). The noise rate is the percentage of the projected instances being messed up. ously outperforms the baseline with the increasing size of training corpus. It is especially noteworthy that the more training data is utilized the more superiority our model enjoys. That is, because our method not only utilizes the projection information but also avails itself of the monolingual corpus. 
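For reference, the parsing accuracy reported throughout this section is a simple per-word measure; a minimal sketch with illustrative names (not the actual evaluation script) is:

```python
def directed_accuracy(predicted_heads, gold_heads):
    """Directed dependency accuracy: the percentage of words whose predicted
    parent index matches the gold parent index.  Each argument is a list of
    sentences, and each sentence is a list of head indices (0 = root)."""
    correct = total = 0
    for pred_sent, gold_sent in zip(predicted_heads, gold_heads):
        for p, g in zip(pred_sent, gold_sent):
            correct += int(p == g)
            total += 1
    return 100.0 * correct / total
```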
6.4 Effect of Projection Quality The projection quality can be influenced by the quality of the source parsing, alignments, projection methods, corpus quality and many other factors. In order to detect the effects of varying projection qualities on our approach, we simulate the complex projection procedure by messing up the projected instances randomly with different noise rates. The curves in Figure 5 show the performance of WPC baseline and our bilingual-guided method. For different noise rates, our model’s results consistently outperform the baselines. When the noise rate is greater than 0.2, our improvement 49.5 ... 54.6 ... 58.2 58.6 59.0 59.4 59.8 60.2 0 0.02 0.04 0.06 0.08 0.1 ... 0.2 ... 0.3 accuracy% alpha our model baseline(58.5) Figure 6: The performance curve of our model (random initialization) on Chinese, with respect to a series of ratio α. The baseline is the result of WPC model. increases with the growth of the noise rate. The result suggests that our method can solve some problems which are caused by projection noise. 6.5 Performance on Random Initialization We test our model with random initialization on different α. The curve in Figure 6 shows the performance of our model on Chinese. The results seem supporting our unsupervised optimization method when α is in the range of (0, 0.1). It implies that the unsupervised structure information is useful, but it seems creating a negative effect on the model when α is greater than 0.1. Because the unsupervised part can gain constraints from the projection part. But with the increase of α, the strength of constraint dwindles, and the unsupervised part will gradually lose control. And bad unsupervised part pulls the full model down. 7 Conclusion and Future Work This paper presents a bilingually-guided strategy for automatic dependency grammar induction, which adopts an unsupervised skeleton and leverages the bilingually-projected dependency information during optimization. By simultaneously maximizing the monolingual likelihood and bilingually-projected likelihood in the EM procedure, it effectively integrates the advantages of bilingual projection and unsupervised induction. Experiments on 5 languages show that the novel strategy significantly outperforms previous unsupervised or bilingually-projected models. Since its computational complexity approaches to the skeleton unsupervised model (with much fewer iterations), and the bilingual text aligned to 1070 resource-rich languages is easy to obtain, such a hybrid method seems to be a better choice for automatic grammar induction. It also indicates that the combination of bilingual constraint and unsupervised methodology has a promising prospect for grammar induction. In the future work we will investigate such kind of strategies, such as bilingually unsupervised induction. Acknowledgments The authors were supported by National Natural Science Foundation of China, Contracts 61202216, 863 State Key Project (No. 2011AA01A207), and National Key Technology R&D Program (No. 2012BAH39B03), Key Project of Knowledge Innovation Program of Chinese Academy of Sciences (No. KGZD-EW-501). Qun Liu’s work is partially supported by Science Foundation Ireland (Grant No.07/CE/I1142) as part of the CNGL at Dublin City University. We would like to thank the anonymous reviewers for their insightful comments and those who helped to modify the paper. References H. Alshawi. 1996. Head automata for speech translation. In Proc. of ICSLP. James K Baker. 1979. Trainable grammars for speech recognition. 
The Journal of the Acoustical Society of America, 65:S132. T. Berg-Kirkpatrick, A. Bouchard-Cˆot´e, J. DeNero, and D. Klein. 2010. Painless unsupervised learning with features. In HLT: NAACL, pages 582–590. Rens Bod. 2006. An all-subtrees approach to unsupervised parsing. In Proc. of the 21st ICCL and the 44th ACL, pages 865–872. S. Buchholz and E. Marsi. 2006. Conll-x shared task on multilingual dependency parsing. In Proc. of the 2002 Conference on EMNLP. Proc. CoNLL. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proc. of the 43rd ACL, pages 173–180, Ann Arbor, Michigan, June. W. Chen, J. Kazama, and K. Torisawa. 2010. Bitext dependency parsing with bilingual subtree constraints. In Proc. of ACL, pages 21–29. S.B. Cohen, D. Das, and N.A. Smith. 2011. Unsupervised structure prediction with non-parallel multilingual guidance. In Proc. of the Conference on EMNLP, pages 50–61. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proc. of the 2002 Conference on EMNLP, pages 1–8, July. Michael Collins. 2003. Head-driven statistical models for natural language parsing. In Computational Linguistics. D. Das and S. Petrov. 2011. Unsupervised part-ofspeech tagging with bilingual graph-based projections. In Proc. of ACL. K. Ganchev, J. Gillenwater, and B. Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proc. of IJCNLP of the AFNLP: Volume 1-Volume 1, pages 369–377. R. Hwa, P. Resnik, A. Weinberg, and O. Kolak. 2002. Evaluating translational correspondence using annotation projection. In Proc. of ACL, pages 392–399. R. Hwa, M. Osborne, A. Sarkar, and M. Steedman. 2003. Corrected co-training for statistical parsers. In ICML-03 Workshop on the Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, Washington DC. R. Hwa, P. Resnik, A. Weinberg, C. Cabezas, and O. Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural language engineering, 11(3):311–325. W. Jiang and Q. Liu. 2010. Dependency parsing and projection based on word-pair classification. In Proc. of ACL, pages 12–20. D. Klein and C.D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL, page 478. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proc. of the 48th ACL, pages 1–11, July. T. Koo, X. Carreras, and M. Collins. 2008. Simple semi-supervised dependency parsing. pages 595– 603. R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proc. of the 11th Conf. of EACL. R. McDonald, K. Crammer, and F. Pereira. 2005a. Online large-margin training of dependency parsers. In Proc. of ACL, pages 91–98. R. McDonald, F. Pereira, K. Ribarov, and J. Hajiˇc. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proc. of EMNLP, pages 523–530. R. McDonald, K. Lerman, and F. Pereira. 2006. Multilingual dependency analysis with a two-stage discriminative parser. In Proc. of CoNLL, pages 216– 220. 1071 R. McDonald, S. Petrov, and K. Hall. 2011. Multisource transfer of delexicalized dependency parsers. In Proc. of EMNLP, pages 62–72. ACL. T. Naseem, B. Snyder, J. Eisenstein, and R. Barzilay. 2009. Multilingual part-of-speech tagging: Two unsupervised approaches. Journal of Artificial Intelligence Research, 36(1):341–385. 
Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proc. of the 50th ACL, pages 629–637, July. J. Nivre, J. Hall, J. Nilsson, G. Eryi˜git, and S. Marinov. 2006. Labeled pseudo-projective dependency parsing with support vector machines. In Proc. of CoNLL, pages 221–225. J. Nivre, J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. K¨ubler, S. Marinov, and E. Marsi. 2007. Maltparser: A language-independent system for datadriven dependency parsing. Natural Language Engineering, 13(02):95–135. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proc. of the 21st ICCL & 44th ACL, pages 433–440, July. A. Sarkar. 2001. Applying co-training methods to statistical parsing. In Proc. of NAACL, pages 1–8. L. Shen, G. Satta, and A. Joshi. 2007. Guided learning for bidirectional sequence classification. In Annual Meeting-, volume 45, page 760. N.A. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proc. of ACL, pages 354–362. D.A. Smith and J. Eisner. 2009. Parser adaptation and projection with quasi-synchronous grammar features. In Proc. of EMNLP: Volume 2-Volume 2, pages 822–831. B. Snyder, T. Naseem, and R. Barzilay. 2009. Unsupervised multilingual grammar induction. In Proc. of IJCNLP of the AFNLP: Volume 1-Volume 1, pages 73–81. Anders Søgaard. 2011. Data point selection for crosslanguage adaptation of dependency parsers. In Proc. of the 49th ACL: HLT, pages 682–686. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2010. From baby steps to leapfrog: How “less is more” in unsupervised dependency parsing. In HLT: NAACL, pages 751–759, June. O. T¨ackstr¨om, R. McDonald, and J. Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. William, M. Johnson, and D. McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proc. of NAACL, pages 101–109. D. Yarowsky, G. Ngai, and R. Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proc. of HLT, pages 1–8. Daniel Zeman and Philip Resnik. 2008. Crosslanguage parser adaptation between related languages. In Proc. of the IJCNLP-08. Proc. CoNLL. Ciyou Zhu, Richard H Byrd, Peihuang Lu, and Jorge Nocedal. 1997. Algorithm 778: L-bfgs-b: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software (TOMS), 23(4):550–560. 1072
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1073–1082, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Joint Word Alignment and Bilingual Named Entity Recognition Using Dual Decomposition Mengqiu Wang Stanford University Stanford, CA 94305 [email protected] Wanxiang Che Harbin Institute of Technology Harbin, China, 150001 [email protected] Christopher D. Manning Stanford University Stanford, CA 94305 [email protected] Abstract Translated bi-texts contain complementary language cues, and previous work on Named Entity Recognition (NER) has demonstrated improvements in performance over monolingual taggers by promoting agreement of tagging decisions between the two languages. However, most previous approaches to bilingual tagging assume word alignments are given as fixed input, which can cause cascading errors. We observe that NER label information can be used to correct alignment mistakes, and present a graphical model that performs bilingual NER tagging jointly with word alignment, by combining two monolingual tagging models with two unidirectional alignment models. We introduce additional cross-lingual edge factors that encourage agreements between tagging and alignment decisions. We design a dual decomposition inference algorithm to perform joint decoding over the combined alignment and NER output space. Experiments on the OntoNotes dataset demonstrate that our method yields significant improvements in both NER and word alignment over state-of-the-art monolingual baselines. 1 Introduction We study the problem of Named Entity Recognition (NER) in a bilingual context, where the goal is to annotate parallel bi-texts with named entity tags. This is a particularly important problem for machine translation (MT) since entities such as person names, locations, organizations, etc. carry much of the information expressed in the source sentence. Recognizing them provides useful information for phrase detection and word sense disambiguation (e.g., “melody” as in a female name has a different translation from the word “melody” in a musical sense), and can be directly leveraged to improve translation quality (Babych and Hartley, 2003). We can also automatically construct a named entity translation lexicon by annotating and extracting entities from bi-texts, and use it to improve MT performance (Huang and Vogel, 2002; Al-Onaizan and Knight, 2002). Previous work such as Burkett et al. (2010b), Li et al. (2012) and Kim et al. (2012) have also demonstrated that bitexts annotated with NER tags can provide useful additional training sources for improving the performance of standalone monolingual taggers. Because human translation in general preserves semantic equivalence, bi-texts represent two perspectives on the same semantic content (Burkett et al., 2010b). As a result, we can find complementary cues in the two languages that help to disambiguate named entity mentions (Brown et al., 1991). For example, the English word “Jordan” can be either a last name or a country. Without sufficient context it can be difficult to distinguish the two; however, in Chinese, these two senses are disambiguated: “乔丹” as a last name, and “约旦” as a country name. In this work, we first develop a bilingual NER model (denoted as BI-NER) by embedding two monolingual CRF-based NER models into a larger undirected graphical model, and introduce additional edge factors based on word alignment (WA). 
Because the new bilingual model contains many cyclic cliques, exact inference is intractable. We employ a dual decomposition (DD) inference algorithm (Bertsekas, 1999; Rush et al., 2010) for performing approximate inference. Unlike most 1073 f1 f2 f3 f4 f5 f6 e1 e2 e3 e4 e5 e6 Xinhua News Agency Beijing Feb 16 B-ORG I-ORG I-ORG [O] B-LOC O O 新华社 , 北京 , 二月 十六 B-ORG O B-GPE O O O Figure 1: Example of NER labels between two word-aligned bilingual parallel sentences. The [O] tag is an example of a wrong tag assignment. The dashed alignment link between e3 and f2 is an example of alignment error. previous applications of the DD method in NLP, where the model typically factors over two components and agreement is to be sought between the two (Rush et al., 2010; Koo et al., 2010; DeNero and Macherey, 2011; Chieu and Teow, 2012), our method decomposes the larger graphical model into many overlapping components where each alignment edge forms a separate factor. We design clique potentials over the alignment-based edges to encourage entity tag agreements. Our method does not require any manual annotation of word alignments or named entities over the bilingual training data. The aforementioned BI-NER model assumes fixed alignment input given by an underlying word aligner. But the entity span and type predictions given by the NER models contain complementary information for correcting alignment errors. To capture this source of information, we present a novel extension that combines the BI-NER model with two uni-directional HMM-based alignment models, and perform joint decoding of NER and word alignments. The new model (denoted as BI-NER-WA) factors over five components: one NER model and one word alignment model for each language, plus a joint NER-alignment model which not only enforces NER label agreements but also facilitates message passing among the other four components. An extended DD decoding algorithm is again employed to perform approximate inference. We give a formal definition of the Bi-NER model in Section 2, and then move to present the Bi-NER-WA model in Section 3. 2 Bilingual NER by Agreement The inputs to our models are parallel sentence pairs (see Figure 1 for an example in English and Chinese). We denote the sentences as e (for English) and f (for Chinese). We assume access to two monolingual linear-chain CRF-based NER models that are already trained. The English-side CRF model assigns the following probability for a tag sequence ye: PCRFe (ye|e) = Q vi∈Ve ψ(vi) Q (vi,vj)∈De ω(vi, vj) Ze(e) where Ve is the set of vertices in the CRF and De is the set of edges. ψ(vi) and ω(vi, vj) are the node and edge clique potentials, and Ze(e) is the partition function for input sequence e under the English CRF model. We let k(ye) be the un-normalized log-probability of tag sequence ye, defined as: k(ye) = log  Y vi∈Ve ψ(vi) Y (vi,vj)∈De ω(vi, vj)   Similarly, we define model PCRFf and unnormalized log-probability l(yf) for Chinese. We also assume that a set of word alignments (A = {(i, j) : ei ↔fj}) is given by a word aligner and remain fixed in our model. For clarity, we assume ye and yf are binary variables in the description of our algorithms. The extension to the multi-class case is straight-forward and does not affect the core algorithms. 2.1 Hard Agreement We define a BI-NER model which imposes hard agreement of entity labels over aligned word pairs. 
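Since the agreement objectives below are expressed in terms of the unnormalized log-scores k(y^e) and l(y^f), a minimal sketch of how such a score is computed may be useful. It assumes precomputed log node potentials and a single shared transition table, which is a simplification of the feature-based CRF potentials; the names are illustrative, not the Stanford CRF implementation.

```python
def sequence_log_score(node_log_potentials, transition_log_potentials, tags):
    """Unnormalized log-score k(y) of a tag sequence under a linear-chain CRF:
    the sum of log node potentials psi(v_i) and log edge potentials
    omega(v_{i-1}, v_i).

    node_log_potentials: indexed as [position][tag]
    transition_log_potentials: indexed as [previous_tag][tag]
        (a shared transition table simplifies the feature-based edge potentials)
    tags: list of tag indices, one per token
    """
    score = sum(node_log_potentials[i][t] for i, t in enumerate(tags))
    score += sum(transition_log_potentials[tags[i - 1]][tags[i]]
                 for i in range(1, len(tags)))
    return score
```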
At inference time, we solve the following opti1074 mization problem: max ye,yf log (PCRFe (ye)) + log PCRFf yf = max ye,yf k(ye) + l(yf) −log Ze(e) −log Zf(f) ≃max ye,yf k(ye) + l(yf) ∋ye i = yf j ∀(i, j) ∈A We dropped the Ze(e) and Zf(f) terms because they remain constant at inference time. The Lagrangian relaxation of this term is: L ye, yf, U  = k (ye) + l yf + X (i,j)∈A u(i, j)  ye i −yf j  where u(i, j) are the Lagrangian multipliers. Instead of solving the Lagrangian directly, we can form the dual of this problem and solve it using dual decomposition (Rush et al., 2010): min U max ye  k (ye) + X (i,j)∈A u(i, j)ye i   + max yf  l yf − X (i,j)∈A u(i, j)yf j   ! Similar to previous work, we solve this DD problem by iteratively updating the sub-gradient as depicted in Algorithm 1. T is the maximum number of iterations before early stopping, and αt is the learning rate at time t. We adopt a learning rate update rule from Koo et al. (2010) where αt is defined as 1 N , where N is the number of times we observed a consecutive dual value increase from iteration 1 to t. A thorough introduction to the theoretical foundations of dual decomposition algorithms is beyond the scope of this paper; we encourage unfamiliar readers to read Rush and Collins (2012) for a full tutorial. 2.2 Soft Agreement The previously discussed hard agreement model rests on the core assumption that aligned words must have identical entity tags. In reality, however, this assumption does not always hold. Firstly, assuming words are correctly aligned, their entity tags may not agree due to inconsistency in annotation standards. In Figure 1, for example, the Algorithm 1 DD inference algorithm for hard agreement model. ∀(i, j) ∈A : u(i, j) = 0 for t ←1 to T do ye∗←argmax k (ye) + P (i,j)∈A u(i, j)ye i yf∗←argmax l yf − P (i,j)∈A u(i, j)yf j if ∀(i, j) ∈A: ye∗ i = yf∗ j then return ye∗, yf∗ end if for all (i, j) ∈A do u(i, j) ←u(i, j) + αt  yf∗ j −ye∗ i  end for end for return ye∗ (T), yf∗ (T)  word “Beijing” can be either a Geo-Political Entity (GPE) or a location. The Chinese annotation standard may enforce that “Beijing” should always be tagged as GPE when it is mentioned in isolation, while the English standard may require the annotator to judge based on word usage context. The assumption in the hard agreement model can also be violated if there are word alignment errors. In order to model this uncertainty, we extend the two previously independent CRF models into a larger undirected graphical model, by introducing a cross-lingual edge factor φ(i, j) for every pair of word positions (i, j) ∈A. We associate a clique potential function h(i,j)(ye i, yf j) for φ(i, j): h(i,j)  ye i, yf j  = pmi  ye i, yf j  ˆP(ei,fj) where pmi(ye i, yf j) is the point-wise mutual information (PMI) of the tag pair, and we raise it to the power of a posterior alignment probability ˆP(ei, fj). For a pair of NEs that are aligned with low probability, we cannot be too sure about the association of the two NEs, therefore the model should not impose too much influence from the bilingual agreement model; instead, we will let the monolingual NE models make their decisions, and trust that those are the best estimates we can come up with when we do not have much confidence in their bilingual association. The use of the posterior alignment probability facilitates this purpose. 
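As a small illustration, the sketch below computes the potential h_(i,j) from a table of tag-pair PMI estimates and a posterior alignment probability. The table, the default value for unseen pairs, and the assumption that the PMI estimates are positive (so the exponent acts as damping) are illustrative choices rather than details taken from the paper.

```python
def soft_agreement_potential(pmi_table, tag_e, tag_f, p_align):
    """Bilingual clique potential h_(i,j)(y^e_i, y^f_j) = pmi(y^e_i, y^f_j) ** P(e_i, f_j).

    pmi_table: dict mapping (english_tag, chinese_tag) -> PMI estimate
               (assumed positive here so the exponent acts as damping)
    p_align:   posterior alignment probability of the word pair, in [0, 1]

    As p_align approaches 0, the potential flattens toward 1 for every tag
    pair, so a low-confidence alignment exerts little influence and the
    monolingual taggers dominate the decision.
    """
    pmi = pmi_table.get((tag_e, tag_f), 1e-6)  # small default for unseen pairs (assumption)
    return pmi ** p_align
```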
Initially, each of the cross-lingual edge factors will attempt to assign a pair of tags that has the highest PMI score, but if the monolingual taggers do not agree, a penalty will start accumulating over this pair, until some other pair that agrees better with the monolingual models takes the top spot. 1075 Simultaneously, the monolingual models will also be encouraged to agree with the cross-lingual edge factors. This way, the various components effectively trade penalties indirectly through the crosslingual edges, until a tag sequence that maximizes the joint probability is achieved. Since we assume no bilingually annotated NER corpus is available, in order to get an estimate of the PMI scores, we first tag a collection of unannotated bilingual sentence pairs using the monolingual CRF taggers, and collect counts of aligned entity pairs from this auto-generated tagged data. Each of the φ(i, j) edge factors (e.g., the edge between node f3 and e4 in Figure 1) overlaps with each of the two CRF models over one vertex (e.g., f3 on Chinese side and e4 on English side), and we seek agreement with the Chinese CRF model over tag assignment of fj, and similarly for ei on English side. In other words, no direct agreement between the two CRF models is enforced, but they both need to agree with the bilingual edge factors. The updated optimization problem becomes: max ye(k)yf(l)ye(h)yf(h)k  ye(k) + l  yf (l) + X (i,j)∈A h(i,j)  ye(h) i , yf(h) j  ∋∀(i, j) ∈A:  ye(k) i = ye(h) i  ∧  yf(l) j = yf(h) j  where the notation ye(k) i denotes tag assignment to word ei by the English CRF and ye(h) i denotes assignment to word ei by the bilingual factor; yf(l) j denotes the tag assignment to word fj by the Chinese CRF and yf(h) j denotes assignment to word fj by the bilingual factor. The updated DD algorithm is illustrated in Algorithm 2 (case 2). We introduce two separate sets of dual constraints we and wf, which range over the set of vertices on their respective half of the graph. Decoding the edge factor model h(i,j)(ye i, yf j) simply involves finding the pair of tag assignments that gives the highest PMI score, subject to the dual constraints. The way DD algorithms work in decomposing undirected graphical models is analogous to other message passing algorithms such as loopy belief propagation, but DD gives a stronger optimality guarantee upon convergence (Rush et al., 2010). 3 Joint Alignment and NER Decoding In this section we develop an extended model in which NER information can in turn be used to improve alignment accuracy. Although we have seen more than a handful of recent papers that apply the dual decomposition method for joint inference problems, all of the past work deals with cases where the various model components have the same inference output space (e.g., dependency parsing (Koo et al., 2010), POS tagging (Rush et al., 2012), etc.). In our case the output space is the much more complex joint alignment and NER tagging space. We propose a novel dual decomposition variant for performing inference over this joint space. Most commonly used alignment models, such as the IBM models and HMM-based aligner are unsupervised learners, and can only capture simple distortion features and lexical translational features due to the high complexity of the structure prediction space. On the other hand, the CRFbased NER models are trained on manually annotated data, and admit richer sequence and lexical features. 
The entity label predictions made by the NER model can potentially be leveraged to correct alignment mistakes. For example, in Figure 1, if the tagger knows that the word “Agency” is tagged I-ORG, and if it also knows that the first comma in the Chinese sentence is not part of any entity, then we can infer it is very unlikely that there exists an alignment link between “Agency” and the comma. To capture this intuition, we extend the BI-NER model to jointly perform word alignment and NER decoding, and call the resulting model BI-NERWA. As a first step, instead of taking the output from an aligner as fixed input, we incorporate two uni-directional aligners into our model. We name the Chinese-to-English aligner model as m(Be) and the reverse directional model n(Bf). Be is a matrix that holds the output of the Chinese-toEnglish aligner. Each be(i, j) binary variable in Be indicates whether fj is aligned to ei; similarly we define output matrix Bf and bf(i, j) for Chinese. In our experiments, we used two HMMbased alignment models. But in principle we can adopt any alignment model as long as we can perform efficient inference over it. We introduce a cross-lingual edge factor ζ(i, j) in the undirected graphical model for every pair of word indices (i, j), which predicts a binary vari1076 Algorithm 2 DD inference algorithm for joint alignment and NER model. A line marked with (2) means it applies to the BI-NER model; a line marked with (3) means it applies to the BI-NER-WA model. S ←A (2) S ←{(i, j): ∀i ∈|e|, ∀j ∈|f|} (3) ∀i ∈|e| : we i = 0; ∀j ∈|f| : wf j = 0 (2,3) ∀(i, j) ∈S : de(i, j) = 0, df(i, j) = 0 (3) for t ←1 to T do ye(k)∗←argmax k  ye(k) + P i∈|e| we i ye(k) i (2,3) yf(l)∗←argmax l  yf(l) + P i∈|f| wf j yf(l) j (2,3) Be∗←argmax m (Be) + P (i,j) de(i, j)be(i, j) (3) Bf∗←argmax n Bf + P (i,j) df(i, j)bf(i, j) (3) for all (i, j) ∈S do (ye(h)∗ i yf(h)∗ j )←−we i ye(h) i −wf j yf(h) j + argmax h(i,j)(ye(q) i yf(q) j ) (2) (ye(q)∗ i yf(q)∗ j a(i, j)∗)←−we i ye(q) i −wf j yf(q) j + argmax q(i,j)(ye(q) i yf(q) j a(i, j)) −de(i, j)a(i, j) −df(i, j)a(i, j) (3) end for Conv = (ye(k)=ye(q) ∧yf(l)=yf(q)) (2) Conv = (Be=A=Bf ∧ye(k)=ye(q)∧yf(l)=yf(q)) (3) if Conv = true , then return  ye(k)∗, yf(l)∗ (2) return  ye(k)∗, yf(l)∗, A  (3) else for all i ∈|e| do we i ←we i + αt  ye(q|h)∗ i −ye(k)∗ i  (2,3) end for for all j ∈|f| do wf j ←wf j + αt  yf(q|h)∗ j −yf(l)∗ j  (2,3) end for for all (i, j) ∈S do de(i, j) ←de(i, j) + αt (ae∗(i, j) −be∗(i, j)) (3) df(i, j) ←df(i, j) + αt af∗(i, j) −bf∗(i, j)  (3) end for end if end for return  ye(k)∗ (T) , yf(l)∗ (T)  (2) return  ye(k)∗ (T) , yf(l)∗ (T) , A(T )  (3) able a(i, j) for an alignment link between ei and fj. The edge factor also predicts the entity tags for ei and fj. The new edge potential q is defined as: q(i,j)  ye i, yf j, a(i, j)  = log(P(a(i, j) = 1)) + S(ye i, yf j|a(i, j))P(a(i,j)=1) S(ye i, yf j|a(i, j))= ( pmi(ye i, yf j), if a(i, j) = 1 0, else P(a(i, j) = 1) is the alignment probability assigned by the bilingual edge factor between node ei and fj. We initialize this value to ˆP(ei, fj) = 1 2(Pm(ei, fj) + Pn(ei, fj)), where Pm(ei, fj) and Pn(ei, fj) are the posterior probabilities assigned by the HMM-aligners. 
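The sketch below does not commit to a particular closed form for the edge potential q; it treats q as a black-box callable and only illustrates the initialization of the edge alignment probability from the two HMM posteriors and the per-edge argmax used in Algorithm 2 (case 3), in the paper's simplified binary-variable exposition. All names are illustrative, not the authors' code.

```python
def init_alignment_prob(p_m, p_n):
    """Initialize P(a(i,j) = 1) as the average of the two HMM posteriors."""
    return 0.5 * (p_m + p_n)

def decode_edge_factor(q, w_e_i, w_f_j, d_e_ij, d_f_ij):
    """Per-edge argmax from Algorithm 2 (case 3), with binary variables:
    choose (y_e, y_f, a) in {0,1}^3 maximizing the edge potential q minus the
    NER dual penalties (w) and the alignment dual penalties (d).  `q` is a
    callable q(y_e, y_f, a) supplied by the caller."""
    def score(y_e, y_f, a):
        return (q(y_e, y_f, a)
                - w_e_i * y_e - w_f_j * y_f
                - d_e_ij * a - d_f_ij * a)
    candidates = [(y_e, y_f, a) for y_e in (0, 1) for y_f in (0, 1) for a in (0, 1)]
    return max(candidates, key=lambda c: score(*c))
```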
The joint optimization problem is defined as: max ye(k)yf(l)ye(h)yf(h)BeBfA k(ye(k)) + l(yf (l))+ m(Be) + n(Bf) + X (i∈|e|,j∈|f|) q(i,j)(yeh i , yf(h) j , a(i, j)) ∋∀(i, j): be(i, j)=a(i, j)  ∧ bf(i, j)=a(i, j)  ∧if a(i, j) = 1 then ye(k) i =ye(h) i  ∧ yf(l) j =yf(h) j  We include two dual constraints de(i, j) and df(i, j) over alignments for every bilingual edge factor ζ(i, j), which are applied to the English and Chinese sides of the alignment space, respectively. The DD algorithm used for this model is given in Algorithm 2 (case 3). One special note is that after each iteration when we consider updates to the dual constraint for entity tags, we only check tag agreements for cross-lingual edge factors that have an alignment assignment value of 1. In other words, cross-lingual edges that are not aligned do not affect bilingual NER tagging. Similar to φ(i, j), ζ(i, j) factors do not provide that much additional information other than some selectional preferences via PMI score. But the real power of these cross-language edge cliques is that they act as a liaison between the NER and alignment models on each language side, and encourage these models to indirectly agree with each other by having them all agree with the edge cliques. It is also worth noting that since we decode the alignment models with Viterbi inference, additional constraints such as the neighborhood constraint proposed by DeNero and Macherey (2011) can be easily integrated into our model. The neighborhood constraint enforces that if fj is aligned to ei, then fj can only be aligned to ei+1 or ei−1 (with a small penalty), but not any other word position. We report results of adding neighborhood constraints to our model in Section 6. 4 Experimental Setup We evaluate on the large OntoNotes (v4.0) corpus (Hovy et al., 2006) which contains manually 1077 annotated NER tags for both Chinese and English. Document pairs are sentence aligned using the Champollion Tool Kit (Ma, 2006). After discarding sentences with no aligned counterpart, a total of 402 documents and 8,249 parallel sentence pairs were used for evaluation. We will refer to this evaluation set as full-set. We use odd-numbered documents as the dev set and evennumbered documents as the blind test set. We did not perform parameter tuning on the dev set to optimize performance, instead we fix the initial learning rate to 0.5 and maximum iterations to 1,000 in all DD experiments. We only use the dev set for model development. The Stanford CRF-based NER tagger was used as the monolingual component in our models (Finkel et al., 2005). It also serves as a stateof-the-art monolingual baseline for both English and Chinese. For English, we use the default tagger setting from Finkel et al. (2005). For Chinese, we use an improved set of features over the default tagger, which includes distributional similarity features trained on large amounts of nonoverlapping data.1 We train the two CRF models on all portions of the OntoNotes corpus that are annotated with named entity tags, except the parallel-aligned portion which we reserve for development and test purposes. In total, there are about 660 training documents (∼16k sentences) for Chinese and 1,400 documents (∼39k sentences) for English. Out of the 18 named entity types that are annotated in OntoNotes, which include person, location, date, money, and so on, we select the four most commonly seen named entity types for evaluation. They are person, location, organization and GPE. 
All entities of these four types are converted to the standard BIO format, and background tokens and all other entity types are marked with tag O. When we consider label agreements over aligned word pairs in all bilingual agreement models, we ignore the distinction between B- and Itags. We report standard NER measures (entity precision (P), recall (R) and F1 score) on the test set. Statistical significance tests are done using the paired bootstrap resampling method (Efron and Tibshirani, 1993). For alignment experiments, we train two uni1The exact feature set and the CRF implementation can be found here: http://nlp.stanford.edu/ software/CRF-NER.shtml directional HMM models as our baseline and monolingual alignment models. The parameters of the HMM were initialized by IBM Model 1 using the agreement-based EM training algorithms from Liang et al. (2006). Each model is trained for 2 iterations over a parallel corpus of 12 million English words and Chinese words, almost twice as much data as used in previous work that yields state-of-the-art unsupervised alignment results (DeNero and Klein, 2008; Haghighi et al., 2009; DeNero and Macherey, 2011). Word alignment evaluation is done over the sections of OntoNotes that have matching goldstandard word alignment annotations from GALE Y1Q4 dataset.2 This subset contains 288 documents and 3,391 sentence pairs. We will refer to this subset as wa-subset. This evaluation set is over 20 times larger than the 150 sentences set used in most past evaluations (DeNero and Klein, 2008; Haghighi et al., 2009; DeNero and Macherey, 2011). Alignments input to the BI-NER model are produced by thresholding the averaged posterior probability at 0.5. In joint NER and alignment experiments, instead of posterior thresholding, we take the direct intersection of the Viterbi-best alignment of the two directional models. We report the standard P, R, F1 and Alignment Error Rate (AER) measures for alignment experiments. An important past work to make comparisons with is Burkett et al. (2010b). Their method is similar to ours in that they also model bilingual agreement in conjunction with two CRFbased monolingual models. But instead of using just the PMI scores of bilingual NE pairs, as in our work, they employed a feature-rich log-linear model to capture bilingual correlations. Parameters in their log-linear model require training with bilingually annotated data, which is not readily available. To counter this problem, they proposed an “up-training” method which simulates a supervised learning environment by pairing a weak classifier with strong classifiers, and train the bilingual model to rank the output of the strong classifier highly among the N-best outputs of the weak classifier. In order to compare directly with their method, we obtained the code behind Burkett et al. (2010b) and reproduced their experimental setting for the OntoNotes data. An extra set of 5,000 unannotated parallel sentence pairs are used for 2LDC Catalog No. LDC2006E86. 1078 Chinese English P R F1 P R F1 Mono 76.89 61.64 68.42 81.98 74.59 78.11 Burkett 77.52 65.84 71.20 82.28 76.64 79.36 Bi-soft 79.14 71.55 75.15 82.58 77.96 80.20 Table 1: NER results on bilingual parallel test set. Best numbers on each measure that are statistically significantly better than the monolingual baseline and Burkett et al. (2010b) are highlighted in bold. training the reranker, and the reranker model selection was performed on the development dataset. 
5 Bilingual NER Results The main results on bilingual NER over the test portion of full-set are shown in Table 1. We initially experimented with the hard agreement model, but it performs quite poorly for reasons we discussed in Section 2.2. The BI-NER model with soft agreement constraints, however, significantly outperforms all baselines. In particular, it achieves an absolute F1 improvement of 6.7% in Chinese and 2.1% in English over the CRF monolingual baselines. A well-known issue with the DD method is that when the model does not necessarily converge, then the procedure could be very sensitive to hyper-parameters such as initial step size and early termination criteria. If a model only gives good performance with well-tuned hyperparameters, then we must have manually annotated data for tuning, which would significantly reduce the applicability and portability of this method to other language pairs and tasks. To evaluate the parameter sensitivity of our model, we run the model from 50 to 3000 iterations before early stopping, and with 6 different initial step sizes from 0.01 to 1. The results are shown in Figure 2. The soft agreement model does not seem to be sensitive to initial step size and almost always converges to a superior solution than the baseline. 6 Joint NER and Alignment Results We present results for the BI-NER-WA model in Table 2. By jointly decoding NER with word alignment, our model not only maintains significant improvements in NER performance, but also yields significant improvements to alignment performance. Overall, joint decoding with NER alone yields a 10.8% error reduction in AER over the baseline HMM-aligners, and also gives improve0 0.01 0.05 0.1 0.2 0.5 1 2 3000 1000 800 500 300 100 50 73 74 75 76 77 78 79 80 initial step size max no. of iterations F1 score Figure 2: Performance variance of the soft agreement models on the Chinese dev dataset, as a function of step size (x-axis) and maximum number of iterations before early stopping (y-axis). ment over BI-NER in NER. Adding additional neighborhood constraints gives a further 6% error reduction in AER, at the cost of a small loss in Chinese NER. In terms of word alignment results, we see great increases in F1 and recall, but precision goes down significantly. This is because the joint decoding algorithm promotes an effect of “soft-union”, by encouraging the two unidirectional aligners to agree more often. Adding the neighborhood constraints further enhances this union effect. 7 Error Analysis and Discussion We can examine the example in Figure 3 to gain an understanding of the model’s performance. In this example, a snippet of a longer sentence pair is shown with NER and word alignment results. The monolingual Chinese tagger provides a strong cue that word f6 is a person name because the unique 4-character word pattern is commonly associated with foreign names in Chinese, and also the word is immediately preceded by the word “president”. The English monolingual tagger, however, confuses the aligned word e0 with a GPE. Our bilingual NER model is able to correct this error as expected. Similarly, the bilingual model corrects the error over e11. However, the model also propagates labeling errors from the English side over the entity “Tibet Autonomous Region” to the Chinese side. Nevertheless, the resulting Chinese tags are arguably more useful than the original tags assigned by the baseline model. 
In terms of word alignment, the HMM models failed badly on this example because of the long 1079 NER-Chinese NER-English word alignment P R F1 P R F1 P R F1 AER HMM-WA 90.43 40.95 56.38 43.62 Mono-CRF 82.50 66.58 73.69 84.24 78.70 81.38 Bi-NER 84.87 75.30 79.80 84.47 81.45 82.93 Bi-NER-WA 84.42 76.34 80.18 84.25 82.20 83.21 77.45 50.43 61.09 38.91 Bi-NER-WA+NC 84.25 75.09 79.41 84.28 82.17 83.21 76.67 54.44 63.67 36.33 Table 2: Joint alignment and NER test results. +NC means incorporating additional neighbor constraints from DeNero and Macherey (2011) to the model. Best number in each column is highlighted in bold. f0 f1 f2 f3 f4 f5 f6 e0 e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 e11 Suolangdaji , president of Tibet Auto. Region branch of Bank of China B-PER O O O B-GPE I-GPE I-GPE O O B-ORG I-ORG I-ORG B-PER O O O [B-LOC] [I-LOC] [I-LOC] O O B-ORG I-ORG I-ORG [B-GPE] O O O [B-LOC] [I-LOC] [I-LOC] O O [O] [O] [B-GPE] 中国 银行 西藏 自治区 分行 行长 索朗达吉 B-ORG I-ORG B-GPE O O O B-PER B-ORG I-ORG [B-LOC] [I-LOC] O O B-PER B-ORG I-ORG [O] O O O B-PER Figure 3: An example output of our BI-NER-WA model. Dotted alignment links are the oracle, dashed links are alignments from HMM baseline, and solid links are outputs of our model. Entity tags in the gold line (closest to nodes ei and fj) are the gold-standard tags; in the green line (second closest to nodes) are output from our model; and in the crimson line (furthest from nodes) are baseline output. distance swapping phenomena. The two unidirectional HMMs also have strong disagreements over the alignments, and the resulting baseline aligner output only recovers two links. If we were to take this alignment as fixed input, most likely we would not be able to recover the error over e11, but the joint decoding method successfully recovered 4 more links, and indirectly resulted in the NER tagging improvement discussed above. 8 Related Work The idea of employing bilingual resources to improve over monolingual systems has been explored by much previous work. For example, Huang et al. (2009) improved parsing performance using a bilingual parallel corpus. In the NER domain, Li et al. (2012) presented a cyclic CRF model very similar to our BI-NER model, and performed approximate inference using loopy belief propagation. The feature-rich CRF formulation of bilingual edge potentials in their model is much more powerful than our simple PMI-based bilingual edge model. Adding a richer bilingual edge model might well further improve our results, and this is a possible direction for further experimentation. However, a big drawback of this approach is that training such a feature-rich model requires manually annotated bilingual NER data, which can be prohibitively expensive to generate. How and where to obtain training signals without manual supervision is an interesting and open question. One of the most interesting papers in this regard is Burkett et al. (2010b), which explored an “up-training” mechanism by using the outputs from a strong monolingual model as ground-truth, and simulated a learning environment where a bilingual model is trained to help a “weakened” monolingual model to recover the results of the strong model. It is worth mentioning that since our method does not require additional training and can take pretty much any existing model as “black-box” during decoding, the richer and more accurate bilingual model learned from Burkett et al. (2010b) can be directly plugged into our model. 
A similar dual decomposition algorithm to ours was proposed by Riedel and McCallum (2011) for biomedical event detection. In their Model 3, the trigger and argument extraction models are reminiscent of the two monolingual CRFs in our model; additional binding agreements are enforced over every protein pair, similar to how we enforce agreement between every aligned word 1080 pair. Martins et al. (2011b) presented a new DD method that combines the power of DD with the augmented Lagrangian method. They showed that their method can achieve faster convergence than traditional sub-gradient methods in models with many overlapping components (Martins et al., 2011a). This method is directly applicable to our work. Another promising direction for improving NER performance is in enforcing global label consistency across documents, which is an idea that has been greatly explored in the past (Sutton and McCallum, 2004; Bunescu and Mooney, 2004; Finkel et al., 2005). More recently, Rush et al. (2012) and Chieu and Teow (2012) have shown that combining local prediction models with global consistency models, and enforcing agreement via DD is very effective. It is straightforward to incorporate an additional global consistency model into our model for further improvements. Our joint alignment and NER decoding approach is inspired by prior work on improving alignment quality through encouraging agreement between bi-directional models (Liang et al., 2006; DeNero and Macherey, 2011). Instead of enforcing agreement in the alignment space based on best sequences found by Viterbi, we could opt to encourage agreement between posterior probability distributions, which is related to the posterior regularization work by Grac¸a et al. (2008). Cromi`eres and Kurohashi (2009) proposed an approach that takes phrasal bracketing constraints from parsing outputs, and uses them to enforce phrasal alignments. This idea is similar to our joint alignment and NER approach, but in our case the phrasal constraints are indirectly imposed by entity spans. We also differ in the implementation details, where in their case belief propagation is used in both training and Viterbi inference. Burkett et al. (2010a) presented a supervised learning method for performing joint parsing and word alignment using log-linear models over parse trees and an ITG model over alignment. The model demonstrates performance improvements in both parsing and alignment, but shares the common limitations of other supervised work in that it requires manually annotated bilingual joint parsing and word alignment data. Chen et al. (2010) also tackled the problem of joint alignment and NER. Their method employs a set of heuristic rules to expand a candidate named entity set generated by monolingual taggers, and then rank those candidates using a bilingual named entity dictionary. Our approach differs in that we provide a probabilistic formulation of the problem and do not require pre-existing NE dictionaries. 9 Conclusion We introduced a graphical model that combines two HMM word aligners and two CRF NER taggers into a joint model, and presented a dual decomposition inference method for performing efficient decoding over this model. Results from NER and word alignment experiments suggest that our method gives significant improvements in both NER and word alignment. Our techniques make minimal assumptions about the underlying monolingual components, and can be adapted for many other tasks such as parsing. 
Acknowledgments The authors would like to thank Rob Voigt and the three anonymous reviewers for their valuable comments and suggestions. We gratefully acknowledge the support of the National Natural Science Foundation of China (NSFC) via grant 61133012, the National “863” Project via grant 2011AA01A207 and 2012AA011102, the Ministry of Education Research of Social Sciences Youth funded projects via grant 12YJCZH304, and the support of the U.S. Defense Advanced Research Projects Agency (DARPA) Broad Operational Language Translation (BOLT) program through IBM. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, or the US government. References Yaser Al-Onaizan and Kevin Knight. 2002. Translating named entities using monolingual and bilingual resources. In Proceedings of ACL. Bogdan Babych and Anthony Hartley. 2003. Improving machine translation quality with automatic named entity recognition. In Proceedings of the 7th International EAMT workshop on MT and other Language Technology Tools, Improving MT through other Language Technology Tools: Resources and Tools for Building MT. 1081 Dimitri P. Bertsekas. 1999. Nonlinear Programming. Athena Scientific, New York. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1991. Wordsense disambiguation using statistical methods. In Proceedings of ACL. Razvan Bunescu and Raymond J. Mooney. 2004. Collective information extraction with relational Markov networks. In Proceedings of ACL. David Burkett, John Blitzer, and Dan Klein. 2010a. Joint parsing and alignment with weakly synchronized grammars. In Proceedings of NAACL-HLT. David Burkett, Slav Petrov, John Blitzer, and Dan Klein. 2010b. Learning better monolingual models with unannotated bilingual text. In Proceedings of CoNLL. Yufeng Chen, Chengqing Zong, and Keh-Yih Su. 2010. On jointly recognizing and aligning bilingual named entities. In Proceedings of ACL. Hai Leong Chieu and Loo-Nin Teow. 2012. Combining local and non-local information with dual decomposition for named entity recognition from text. In Proceedings of 15th International Conference on Information Fusion (FUSION). Fabien Cromi`eres and Sadao Kurohashi. 2009. An alignment algorithm using belief propagation and a structure-based distortion model. In Proceedings of EACL/ IJCNLP. John DeNero and Dan Klein. 2008. The complexity of phrase alignment problems. In Proceedings of ACL. John DeNero and Klaus Macherey. 2011. Modelbased aligner combination using dual decomposition. In Proceedings of ACL. Brad Efron and Robert Tibshirani. 1993. An Introduction to the Bootstrap. Chapman & Hall, New York. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of ACL. Joao Grac¸a, Kuzman Ganchev, and Ben Taskar. 2008. Expectation maximization and posterior constraints. In Proceedings of NIPS. Aria Haghighi, John Blitzer, John DeNero, and Dan Klein. 2009. Better word alignments with supervised ITG models. In Proceedings of ACL. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% solution. In Proceedings of NAACL-HLT. Fei Huang and Stephan Vogel. 2002. Improved named entity translation and bilingual named entity extraction. In Proceedings of the 2002 International Conference on Multimodal Interfaces (ICMI). 
Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proceedings of EMNLP. Sungchul Kim, Kristina Toutanova, and Hwanjo Yu. 2012. Multilingual named entity recognition using parallel data and metadata from Wikipedia. In Proceedings of ACL. Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of EMNLP. Qi Li, Haibo Li, Heng Ji, Wen Wang, Jing Zheng, and Fei Huang. 2012. Joint bilingual name tagging for parallel corpora. In Proceedings of CIKM. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of HLT-NAACL. Xiaoyi Ma. 2006. Champollion: A robust parallel text sentence aligner. In Proceedings of LREC. Andr´e F. T. Martins, Noah A. Smith, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2011a. Dual decomposition with many overlapping components. In Proceedings of EMNLP. Andre F. T. Martins, Noah A. Smith, Eric P. Xing, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2011b. Augmenting dual decomposition for map inference. In Proceedings of the International Workshop on Optimization for Machine Learning (OPT 2010). Sebastian Riedel and Andrew McCallum. 2011. Fast and robust joint models for biomedical event extraction. In Proceedings of EMNLP. Alexander M. Rush and Michael Collins. 2012. A tutorial on dual decomposition and Lagrangian relaxation for inference in natural language processing. JAIR, 45:305–362. Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of EMNLP. Alexander M. Rush, Roi Reichert, Michael Collins, and Amir Globerson. 2012. Improved parsing and POS tagging using inter-sentence consistency constraints. In Proceedings of EMNLP. Charles Sutton and Andrew McCallum. 2004. Collective segmentation and labeling of distant entities in information extraction. In Proceedings of ICML Workshop on Statistical Relational Learning and Its connections to Other Fields. 1082
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1083–1093, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Resolving Entity Morphs in Censored Data Hongzhao Huang1, Zhen Wen2, Dian Yu1, Heng Ji1, Yizhou Sun3, Jiawei Han4, He Li5 1Computer Science Department and Linguistics Department, Queens College and Graduate Center, City University of New York, New York, NY, USA 2IBM T. J. Watson Research Center, Hawthorne, NY, USA 3College of Computer and Information Science, Northeastern University, Boston, MA, USA 4Computer Science Department, Univerisity of Illinois at Urbana-Champaign, Urbana, IL, USA 5Admaster Inc., China {hongzhaohuang1,yudiandoris1,hengjicuny1, liheact5}@gmail.com, [email protected], [email protected], [email protected] Abstract In some societies, internet users have to create information morphs (e.g. “Peace West King” to refer to “Bo Xilai”) to avoid active censorship or achieve other communication goals. In this paper we aim to solve a new problem of resolving entity morphs to their real targets. We exploit temporal constraints to collect crosssource comparable corpora relevant to any given morph query and identify target candidates. Then we propose various novel similarity measurements including surface features, meta-path based semantic features and social correlation features and combine them in a learning-to-rank framework. Experimental results on Chinese Sina Weibo data demonstrate that our approach is promising and significantly outperforms baseline methods1. 1 Introduction Language constantly evolves to maximize communicative success and expressive power in daily social interactions. The proliferation of online social media significantly expedites this evolution, as new phrases triggered by social events may be disseminated rapidly in social media. To automatically analyze such fast evolving language in social media, new computational models are demanded. In this paper, we focus on one particular language evolution that creates new ways to communicate sensitive subjects because of the existence of internet information censorship. We call this 1Some of the resources and open source programs developed in this work are made freely available for research purpose at http://nlp.cs.qc.cuny.edu/Morphing.tar.gz phenomenon information morph. For example, when Chinese online users talk about the former politician “Bo Xilai”, they use a morph “Peace West King” instead, a historical figure four hundreds years ago who governed the same region as Bo. Morph can be considered as a special case of alias used for hiding true entities in malicious environment (Hsiung et al., 2005; Pantel, 2006). However, social network plays an important role in generating morphs. Usually morphs are generated by harvesting the collective wisdom of the crowd to achieve certain communication goals. Aside from the purpose of avoiding censorship, other motivations for using morph include expressing sarcasm/irony, positive/negative sentiment or making descriptions more vivid toward some entities or events. Table 1 presents the wide range of cases that are used to create the morphs. We can see that a morph can be either a regular term with new meaning or a newly created term. Morph Target Motivation Peace West King Bo Xilai Sensitive Blind Man Chen Guangcheng Sensitive Miracle Brother Wang Yongping Irony Kim Fat Kim Joing-il Negative Kimchi Country South Korea Vivid Table 1: Morph Examples and Motivations. 
We believe that successful resolution of morphs is a crucial step toward automated understanding of fast-evolving social media language, which is important for social media marketing (Barwise and Meehan, 2010). Another application is to help ordinary users who lack the background or cultural knowledge to understand internet language in their daily use. Furthermore, our approach can also be applied to the recognition of satire and other implicit meanings, as well as to information extraction (Bollegala et al., 2011). However, morph resolution in social media is challenging for the following reasons. First, the sensitive real targets that exist in the same data source under active censorship are often automatically filtered. Table 2 presents the distributions of some example morphs and their targets in English Twitter and Chinese Sina Weibo. For example, the target "Chen Guangcheng" appears only once in Weibo. Thus, the co-occurrence of a morph and its target is quite low in the vast amount of information in social media. Second, most morphs were not created based on pronunciations, spellings or other encryptions of their original targets. Instead, they were created from semantically related entities in historical and cultural narratives (e.g., "Peace West King" as a morph of "Bo Xilai") and thus are very difficult to capture with typical lexical features. Third, tweets from Twitter and Chinese Weibo are short (up to 140 characters) and noisy, making it difficult to extract rich and accurate evidence from such limited context.

Morph | Target | Morph freq. (Twitter) | Target freq. (Twitter) | Morph freq. (Weibo) | Target freq. (Weibo)
Hu Ji | Hu Jintao | 1 | 3,864 | 2,611 | 71
Blind Man | Chen Guangcheng | 18 | 2,743 | 20,941 | 1
Baby | Wen Jiabao | 2,238 | 2,021 | 26,279 | 8
Table 2: Distributions of Morph Examples.

To the best of our knowledge, this is the first work to use NLP and social network analysis techniques to automatically resolve morphed information. To address the above challenges, our paper offers the following novel contributions.
• We detect target candidates by exploiting the dynamics of social media to extract temporal distributions of entities, based on the assumption that the popularity of an individual is correlated between censored and uncensored text within a certain time window.
• Our approach builds and analyzes heterogeneous information networks from multiple sources, such as Twitter, Sina Weibo and web documents in formal genres (e.g., news), because a morph and its target tend to appear in similar contexts.
• We propose two new similarity measures and integrate temporal information into the similarity measures to generate global semantic features.
• We model social user behaviors and use social correlation to assist in measuring semantic similarities, because the users who post a morph and its corresponding target tend to share similar interests and opinions.
Our experiments demonstrate that the proposed approach significantly outperforms traditional alias detection methods (Hsiung et al., 2005).

2 Approach Overview

Figure 1: Overview of Morph Resolution. (The pipeline takes a morph query, acquires comparable censored and uncensored data, performs semantic annotation and target candidate identification, and ranks the target candidates with a learning-to-rank model over surface, semantic and social features to produce the target.)

Given a morph query m, the goal of morph resolution is to find its real target. Figure 1 depicts the general procedure of our approach.
It consists of two main sub-tasks:
• Target Candidate Identification: For each m, discover a list of target candidates E = {e1, e2, ..., eN}. First, relevant comparable data sets that include m are retrieved. In this paper we collect comparable censored data from Weibo and uncensored data from Twitter and from Web documents such as news articles. We then apply various annotations, such as word segmentation, part-of-speech tagging, noun phrase chunking, name tagging and event extraction, to these data sets.
• Target Candidate Ranking: Rank the target candidates in E. We explore various features, including surface, semantic and social features, and incorporate them into a learning-to-rank framework. Finally, the top ranked candidate is produced as the resolved target.

3 Target Candidate Identification

The general goal of the first step is to identify a list of target candidates for each morph query from the comparable corpora, which include Sina Weibo, Chinese news websites and English Twitter. However, we obviously cannot consider all of the named entities in these sources as target candidates due to the sheer volume of information. In addition, morphs are not limited to named entity forms. In order to narrow down the scope of target candidates, we propose a Temporal Distribution Assumption. The intuition is that a morph m and its real target e should have similar temporal distributions in terms of their occurrences. Suppose the data sets are separated into Z temporal slots (e.g., by day); the assumption can be stated as follows. Let Tm = {tm1, tm2, ..., tmZm} be the set of temporal slots in which morph m occurs, and Te = {te1, te2, ..., teZe} be the set of slots in which a target candidate e occurs. Then e is considered a target candidate of m if and only if, for each tmi ∈ Tm (i = 1, 2, ..., Zm), there exists a j ∈ {1, 2, ..., Ze} such that tmi − tej ≤ δ, where δ is a threshold value (in this paper we set the threshold to 7 days, optimized on a development set). For comparison we also attempted a topic modeling approach to detect target candidates, as shown in Section 5.3.

4 Target Candidate Ranking

Next, we propose a learning-to-rank framework that ranks target candidates using novel features derived from surface, semantic and social analysis.

4.1 Surface Features

We first extract surface features between the morph and the candidate using orthographic similarity measures commonly used in entity coreference resolution (e.g., Ng, 2010; Hsiung et al., 2005). The measures we use include "string edit distance", "normalized string edit distance" (Wagner and Fischer, 1974) and "longest common subsequence" (Hirschberg, 1977).

4.2 Semantic Features

4.2.1 Motivations

Although a morph and its target may have very different orthographic forms, they fortunately tend to be embedded in similar semantic contexts involving similar topics and events. Figure 2 presents some example messages under censorship (Weibo) and not under censorship (Twitter and Chinese Daily). We can see that they include similar topics, events (e.g., "fell from power", "gang crackdown", "sing red songs") and semantic relations (e.g., family relations with "Bo Guagua"). Therefore, if we can automatically extract and exploit these indicative semantic contexts, we can narrow down the real targets effectively.

Figure 2: Cross-source Comparable Data Example (each morph and target pair is shown in the same color).
Weibo (censored):
• Peace West King from Chongqing fell from power, still need to sing red songs?
• There is no difference between that guy's plagiarism and Buhou's gang crackdown.
• Remember that Buhou said that his family was not rich at the press conference a few days before he fell from power. His son Bo Guagua is supported by his scholarship.
Twitter and Chinese News (uncensored):
• Bo Xilai: ten thousand letters of accusation have been received during the Chongqing gang crackdown.
• The webpage of "Tianze Economic Study Institute" owned by the liberal party has been closed. This is the first affected website of the liberal party after Bo Xilai fell from power.
• Bo Xilai gave an explanation about the source of his son Bo Guagua's tuition.
• Bo Xilai led Chongqing city leaders and 40 district and county party and government leaders to sing red songs.
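Before moving on to network construction (Section 4.2.2), the Temporal Distribution Assumption of Section 3 can be made concrete with a short sketch. The Python code below is only an illustration under assumed data structures (slots_of maps each morph or entity to the set of temporal slots in which it occurs); it is not the authors' implementation, and it reads the constraint as a bound on the absolute slot difference.

def is_temporally_compatible(morph_slots, cand_slots, delta=7):
    # Temporal Distribution Assumption: every slot in which the morph occurs
    # must lie within delta slots (here, days) of some slot in which the
    # candidate occurs.  The absolute difference is one reading of the
    # constraint t_mi - t_ej <= delta stated above.
    return all(
        any(abs(t_m - t_e) <= delta for t_e in cand_slots)
        for t_m in morph_slots
    )

def filter_candidates(morph, entities, slots_of, delta=7):
    # Keep only entities whose occurrence slots are compatible with the
    # morph's occurrence slots; this prunes the candidate space before ranking.
    return [e for e in entities
            if is_temporally_compatible(slots_of[morph], slots_of.get(e, set()), delta)]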
4.2.2 Information Network Construction

We define an information network as a directed graph G = (V, E) with an object type mapping function τ : V → A and a link type mapping function φ : E → R, where each object v ∈ V belongs to one particular object type τ(v) ∈ A and each link e ∈ E belongs to a particular relation φ(e) ∈ R. If two links belong to the same relation type, then they share the same starting object type as well as the same ending object type. An information network is homogeneous if and only if there is only one type for both objects and links, and heterogeneous when the objects are from multiple distinct types or there exists more than one type of link. In order to construct the information networks for morphs, we apply the Stanford Chinese word segmenter with the Chinese Penn Treebank segmentation standard (Chang et al., 2008) and the Stanford part-of-speech tagger (Toutanova et al., 2003) to process each sentence in the comparable data sets. We then apply a hierarchical Hidden Markov Model (HMM) based Chinese lexical analyzer, ICTCLAS (Zhang et al., 2003), to extract named entities, noun phrases and events. We also attempted to use the output of dependency parsing, relation extraction and event extraction tools (Ji and Grishman, 2008) to enrich the link types. Unfortunately, because the state-of-the-art techniques for these tasks still perform poorly on social media in terms of both accuracy and coverage of important information, these more sophisticated semantic links all had a negative impact on target ranking performance. Therefore we limited the vertex types to Morph (M), Entity (E) (which includes target candidates), Event (EV) and Non-Entity Noun Phrase (NP), and used co-occurrence as the edge type. We extract entities, events and non-entity noun phrases that occur in more than one tweet as neighbors, and for two vertices xi and xj, the weight wij of their edge is the frequency with which they co-occur within tweets. A network schema of such networks is shown in Figure 3.

Figure 3: Network Schema of the Morph-Related Heterogeneous Information Network (node types: Morph (M), Entity (E), Event (EV) and Non-Entity Noun Phrase (NP)).

Figure 4 presents an example of a heterogeneous information network built from the motivating examples following the above network schema, which connects the morphs "Peace West King" and "Buhou" to their corresponding target "Bo Xilai".

4.2.3 Meta-Path-Based Semantic Similarity Measurements

Given the constructed network, a straightforward solution for finding the target of a morph is link-based similarity search. However, objects are now linked to different types of neighbors, and treating all neighbors the same may cause information loss.
For example, the entity "重庆(Chongqing)" is a very important aspect characterizing the politician "薄熙来(Bo Xilai)", since he governed it; if a morph m is also highly correlated with "重庆(Chongqing)", it is very likely that "Bo Xilai" is the real target of m. Therefore, the semantic features generated from neighbors such as the entity "重庆(Chongqing)" should be treated differently from those generated from other types of neighbors such as "人才(talented people)".

Figure 4: Example of a Morph-Related Heterogeneous Information Network. (Nodes in the example: morphs "Buhou" and "Peace West King"; entities "Bo Xilai", "Bo Guagua" and "Chongqing"; events "Gang Crackdown", "Fell From Power" and "Sing Red Songs".)

In this work, we propose to measure the similarity of two nodes over heterogeneous networks such as the one in Figure 3 by distinguishing neighbors into three types according to the network schema (i.e., entities, events and non-entity noun phrases). We then adopt meta-path-based similarity measures (Sun et al., 2011a; Sun et al., 2011b), which are defined over heterogeneous networks, to extract semantic features. A meta-path is a path defined over a network and composed of a sequence of relations between different object types. For example, as shown in Figure 3, a morph and its target candidate can be connected by three meta-paths: "M - E - E", "M - EV - E" and "M - NP - E". Intuitively, each meta-path provides a unique angle from which to measure how similar two objects are. For the chosen meta-paths, we extract semantic features using the similarity measures proposed in previous work (Sun et al., 2011a; Hsiung et al., 2005). We denote the neighbor sets of a certain type for a morph m and a target candidate e as Γ(m) and Γ(e), and a meta-path as P. We now list several meta-path-based similarity measures.
Common neighbors (CN). This measures the number of common neighbors that m and e share, |Γ(m) ∩ Γ(e)|.
Path count (PC). This measures the number of path instances between m and e following meta-path P.
Pairwise random walk (PRW). For a meta-path P that can be decomposed into two shorter meta-paths of the same length, P = (P1 P2), pairwise random walk measures the probability of pairwise random walks starting from both m and e and reaching the same middle object. More formally, it is computed as Σ_{(p1 p2) ∈ (P1 P2)} prob(p1) prob(p2^-1), where p2^-1 is the inverse of p2.
Kullback-Leibler distance (KLD). For m and e, the pairwise random walk probabilities of their neighbors can be represented as two probability vectors, and the Kullback-Leibler distance (Hsiung et al., 2005) can then be used to compute sim(m, e).
Beyond the above similarity measures, we also propose a cosine-similarity-style normalization method to modify the common neighbor and pairwise random walk measures, so that we can ensure that the morph node and the target candidate node are strongly connected and also have similar popularity. The modified measures penalize features involving highly popular objects, since such objects are more likely to have accidental interactions with each other.
Normalized common neighbors (NCN). Normalized common neighbors can be measured as sim(m, e) = |Γ(m) ∩ Γ(e)| / (√|Γ(m)| · √|Γ(e)|). It refines the simple counting of common neighbors by avoiding bias toward highly visible or concentrated objects.
Pairwise random walk/cosine (PRW/cosine). Pairwise random walk weights linkages disproportionately by their visibility to their neighbors, which may be too strong. Instead, we propose a tamer normalization, Σ_{(p1 p2) ∈ (P1 P2)} f(p1) f(p2^-1), where
f(p1) = count(m, x) / √(Σ_{x ∈ Ω} count(m, x)) and f(p2) = count(e, x) / √(Σ_{x ∈ Ω} count(e, x)); here Ω is the set of middle objects connecting the decomposed meta-paths p1 and p2^-1, count(y, x) is the total number of paths between y and the middle object x, and y can be m or e. The above similarity measures can also be applied to homogeneous networks that do not differentiate the neighbor types.

4.2.4 Global Semantic Feature Generation

A morph tends to have a higher temporal correlation with its real target, and to share more similar topics with it, than with irrelevant targets. Therefore, we propose to incorporate temporal information into the similarity measures to generate global semantic features. Let T = t1 ∪ t2 ∪ ... ∪ tN be the set of temporal slots (i.e., days), and let E be the set of target candidates for a morph m. Then for each ti ∈ T and each e ∈ E, a local semantic feature sim_ti(m, e) is extracted based only on the information posted within ti, using one of the similarity measures introduced in Section 4.2.3. We then propose two approaches to generate global semantic features. The first approach adds up the similarity scores between m and e over the temporal slots to obtain the first set of global features: sim_global_sum(m, e) = Σ_{ti ∈ T} sim_ti(m, e). The second approach first normalizes the similarity score in each temporal slot ti and then sums the normalized scores to generate the second set of global features: sim_global_norm(m, e) = Σ_{ti ∈ T} norm_ti(m, e), where norm_ti(m, e) = sim_ti(m, e) / Σ_{e′ ∈ E} sim_ti(m, e′).

4.2.5 Integrating Cross-Source/Cross-Genre Information

Due to internet information censorship or surveillance, users may need to use morphs to post sensitive information. For example, the Chinese Weibo message "都进去了,还要贡着不厚吗" (Already put in prison, still need to serve Buhou?) includes the morph 不厚 (Buhou). In contrast, users are less restricted in some other, uncensored social media such as Twitter. For example, the tweet from Twitter "...把薄熙来称作"平西王"或者"不厚"..." (...call Bo Xilai "Peace West King" or "Buhou"...) contains both the morph and the real target 薄熙来 (Bo Xilai). Therefore, we propose to integrate information from another source (e.g., Twitter) to help resolve sensitive morphs in Weibo. Another difficulty of morph resolution in micro-blogging is that tweets may contain at most 140 characters and tend to be noisy and topically diverse. The shortness and diversity of tweets may limit the power of content analysis for semantic feature extraction. However, formal genres such as web documents are cleaner and contain richer contexts, and thus can provide more topically related information. In this work, we therefore also exploit the background web documents from the embedded URLs in tweets to enrich information network construction. After applying the same annotation techniques to the uncensored data sets as to tweets, sentence-level co-occurrence relations are extracted and integrated into the network as shown in Figure 3.
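To illustrate Sections 4.2.3 and 4.2.4 concretely before turning to social features, the following Python sketch computes the normalized-common-neighbor similarity within each temporal slot and aggregates it into the two global features defined above. It is only a minimal sketch under assumed data structures (neighbors_by_slot maps each temporal slot to each node's neighbor set of one type); it is not the authors' implementation.

from collections import defaultdict
from math import sqrt

def ncn(neigh_m, neigh_e):
    # Normalized common neighbors: |common| / (sqrt(|neighbors of m|) * sqrt(|neighbors of e|)),
    # computed over a single neighbor type (entities, events, or noun phrases).
    if not neigh_m or not neigh_e:
        return 0.0
    return len(neigh_m & neigh_e) / (sqrt(len(neigh_m)) * sqrt(len(neigh_e)))

def global_semantic_features(morph, candidates, neighbors_by_slot):
    # Aggregate per-slot similarities into the two global features:
    # sim_global_sum sums raw per-slot scores; sim_global_norm sums scores
    # normalized over all candidates within each slot.
    sum_feat = defaultdict(float)
    norm_feat = defaultdict(float)
    for slot, neigh in neighbors_by_slot.items():
        local = {e: ncn(neigh.get(morph, set()), neigh.get(e, set()))
                 for e in candidates}
        z = sum(local.values())
        for e, score in local.items():
            sum_feat[e] += score
            if z > 0:
                norm_feat[e] += score / z
    return sum_feat, norm_feat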
4.3 Social Features

It has been shown that there exists correlation between neighbors in social networks (Anagnostopoulos et al., 2008; Wen and Lin, 2010). Because of such social correlation, close social neighbors in social media such as Twitter and Weibo may post similar information or share similar opinions. Therefore, we can utilize social correlation to assist in resolving morphs. As social correlation can be defined as a function of the social distance between a pair of users, we use social distance as a proxy for social correlation in our approach. The social distance between users i and j is defined by considering the degree of separation in their interactions (e.g., retweeting and mentioning) and the amount of interaction. A similar definition has been shown to be effective in characterizing social distance in social networks extracted from communication data (Lin et al., 2012; Wen and Lin, 2010). Specifically, it is dist(i, j) = Σ_{k=1}^{K−1} 1 / strength(vk, vk+1), where v1, ..., vK are the nodes on the shortest path from user i to user j, and strength(vk, vk+1) measures the strength of interaction between vk and vk+1 as strength(i, j) = log(Xij) / max_j log(Xij), where Xij is the total number of interactions between users i and j, including both retweeting and mentioning (if Xij < 10, we set strength(i, j) = 0). We integrate social correlation and temporal information to define our social features. The intuition is that when a morph is used by a user, the real target may also appear in posts by that user or his/her close friends within a certain time period. Let T be the set of temporal slots in which a morph m occurs, Ut be the set of users whose posts include m in slot t ∈ T, and Uc be the set of close friends (i.e., social distance < 0.5) of the users in Ut. The social features are defined as s(m, e) = Σ_{t ∈ T} f(e, t, Ut, Uc) / |T|, where f(e, t, Ut, Uc) is an indicator function that returns 1 if one of the users in Ut or Uc posts a tweet that includes the target candidate e within 7 days before t.

4.4 Learning-to-Rank

Similar to Hsiung et al. (2005) and Sun et al. (2011a), we model the probability of a link between a morph m and a target candidate e as a function incorporating the surface, semantic and social features. Given training pairs ⟨m, e⟩, we use a standard logistic regression model to learn weights for the features defined above. The learned model is then used to predict the probability of linking an unseen morph to each of its target candidates. The candidates are ranked by this probability in descending order, and the top k are selected as the final answers for answer size k.

5 Experiments

Next, we present experiments under the various feature settings shown in Table 3, as well as the impact of cross-source and cross-genre information.

5.1 Data and Evaluation Metric

We collected 1,553,347 tweets from Chinese Sina Weibo from May 1 to June 30 to construct the censored data set, and retrieved 66,559 web documents from the embedded URLs in tweets as the initial uncensored data set. Retweets and redundant web documents were filtered out to ensure more reliable frequency counts for co-occurrence relations. We asked two native Chinese annotators to analyze the data and construct a test set consisting of 107 morph entities (81 persons and 26 locations) and their real targets as our references. We verified the references against Web resources, including the summary of popular morphs in Wikipedia.2 In addition, we used 23 sensitive morphs and the entities that appear in the tweets as queries and retrieved 25,128 Chinese tweets from the 10% Twitter feed within the same time period, as well as 7,473 web documents from the embedded URLs, and added them to the uncensored data set. To evaluate system performance, we use leave-one-out cross-validation, computing accuracy as Acc@k = Ck / Q, where Ck is the total number of morphs correctly resolved within the top k ranked answers and Q is the total number of morph queries. We consider a morph correctly resolved at the top k answers if the top k answer set contains the real target of the morph.
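As a rough illustration of the ranking and evaluation setup in Sections 4.4 and 5.1, the Python sketch below trains a logistic regression model over per-pair feature vectors and computes Acc@k. The query and feature layout, and the use of scikit-learn, are assumptions made for illustration; this is not the authors' code.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ranker(X_train, y_train):
    # One row of surface/semantic/social features per (morph, candidate) pair;
    # the label is 1 iff the candidate is the real target of the morph.
    return LogisticRegression().fit(X_train, y_train)

def acc_at_k(model, queries, k):
    # Acc@k = C_k / Q: a morph query counts as correct if its gold target is
    # among the top-k candidates ranked by predicted linkage probability.
    correct = 0
    for q in queries:  # q: {'features': 2-D array, 'candidates': list, 'gold': str}
        probs = model.predict_proba(np.asarray(q['features']))[:, 1]
        top_k = {q['candidates'][i] for i in np.argsort(-probs)[:k]}
        correct += int(q['gold'] in top_k)
    return correct / len(queries)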
2http://zh.wikipedia.org/wiki/中国大陆网络语言列表 1088 Feature sets Descriptions Surf Surface features HomB Semantic features extracted from homogeneous CN, PC, PRW, and KLD HomE HomB + semantic features extracted from homogeneous NCN and PRW/cosine HetB Semantic features extracted from heterogeneous CN, PC, PRW and KLD HetE HetB + Semantic features extracted from heterogeneous NCN and PRW/cosine Glob∗ Global semantic features Social Social network features Table 3: Description of feature sets. ∗Glob only uses the same set of similarity measures when combined with other semantic features. 5.2 Resolution Performance 5.2.1 Single Genre Information We first study the contributions of each set of surface and semantic features, as shown in the first five rows in Table 4. The poor performance based on surface features shows that morph resolution task is very challenging since 70% of morphs are not orthographically similar to their real targets. Thus, capturing a morph’s semantic meaning is crucial. Overall, the results demonstrate the effectiveness of our proposed methods. Specifically, comparing “HomB” and “HetB”, “HomE” and “HetE”, we can see that the semantic features based on heterogeneous networks have advantages over those based on homogeneous networks. This corroborates that different neighbor sets contribute differently, and such discrepancies should be captured. And comparisions of “HomB” and “HomE”, “HetB” and “HetE”demonstrate the effectiveness of our two new proposed measures. To evaluate the importance of each similarity measures, we delete the semantic features obtained from each measure in “HetE” and re-evaluate the system. We find that NCN is the most effective measure, while KLD is the least important one. Further adding the global semantic features significantly improves the performance. This indicates that capturing both temporal correlations and semantics of morphing simultaneously are important for morph resolution. Table 5 shows that combination of surface and semantic features further improves the performance, showing that they are complementary. For example, using only surface features, the real target “乔布斯(Steve Jobs)” of the morph “乔帮 主(Qiao Boss)” is not top ranked since some other candidates such as “乔治(George)” are more orthographically similar. However, “Steve Jobs” is ranked top when combined with semantic features. Features Surf HomB HomE HetB HetE Acc@1 0.028 0.201 0.192 0.224 0.252 Acc@5 0.159 0.313 0.369 0.393 0.421 Acc@10 0.243 0.346 0.407 0.439 0.467 Acc@20 0.313 0.411 0.467 0.50 0.523 Features + Glob + Glob + Glob + Glob Acc@1 0.230 0.285 0.257 0.285 Acc@5 0.402 0.407 0.449 0.458 Acc@10 0.435 0.458 0.50 0.495 Acc@20 0.486 0.523 0.565 0.542 Table 4: The System Performance Based on Each Single Feature Set. Features Surf + HomB Surf + HomE Surf + HetB Surf + HetE Acc@1 0.234 0.238 0.262 0.276 Acc@5 0.416 0.444 0.481 0.519 Acc@10 0.477 0.505 0.533 0.570 Acc@20 0.519 0.561 0.565 0.598 Features + Glob + Glob + Glob + Glob Acc@1 0.290 0.341 0.322 0.346 Acc@5 0.505 0.495 0.528 0.533 Acc@10 0.551 0.551 0.579 0.584 Acc@20 0.594 0.603 0.636 0.631 Table 5: The System Performance Based on Combinations of Surface and Semantic Features. 5.2.2 Cross Source and Cross Genre Information We integrate the cross source information from Twitter, and the cross genre information from web documents into Weibo tweets for information network construction, and extract a new set of semantic features. Table 6 shows that further gains can be achieved. 
Notice that integrating tweets from Twitter mainly improves the ranking for top k where k > 1. This is because Weibo dominates our dataset, and in Weibo many of these sensitive morphs are mostly used with their traditional meanings instead of the morph senses. Further performance improvement is achieved by integrating information from background formal web documents which can provide richer context and relations. 1089 Features Surf + HomB + Glob Surf + HomE + Glob Surf + HetB + Glob Surf + HetE + Glob Acc@1 0.290 0.341 0.322 0.346 Acc@5 0.505 0.495 0.528 0.533 Acc@10 0.551 0.551 0.579 0.584 Acc@20 0.594 0.603 0.636 0.631 Features + Twitter + Twitter + Twitter + Twitter Acc@1 0.308 0.336 0.336 0.346 Acc@5 0.514 0.519 0.547 0.565 Acc@10 0.579 0.594 0.594 0.636 Acc@20 0.631 0.640 0.668 0.668 Features + Web + Web + Web + Web Acc@1 0.327 0.360 0.341 0.379 Acc@5 0.528 0.519 0.565 0.575 Acc@10 0.594 0.589 0.622 0.645 Acc@20 0.631 0.650 0.678 0.678 Table 6: The System Performance of Integrating Cross Source and Cross Genre Information. 5.2.3 Effects of Social Features Table 7 shows that adding social features can improve the best performance achieved so far. This is because a group of people with close relationships may share similar opinion. As an example, two tweets “...of course the reputation of Buhou is a little too high! //@User1: //@User2: Chongqing event tells us...)” and “...do not follow Bo Xilai...@User1...) are from two users in the same social group.One includes a morph “Buhou” and the other includes its target “Bo Xilai”. Features Surf + HomB + Glob + Twitter + Web Surf + HomE + Glob + Twitter + Web Surf + HetB + Glob + Twitter + Web Surf + HetE + Glob + Twitter + Web Acc@1 0.327 0.360 0.341 0.379 Acc@5 0.528 0.519 0.565 0.575 Acc@10 0.594 0.589 0.622 0.645 Acc@20 0.631 0.650 0.678 0.678 Features + Social + Social + Social + Social Acc@1 0.336 0.369 0.365 0.379 Acc@5 0.537 0.547 0.589 0.594 Acc@10 0.594 0.601 0.645 0.659 Acc@20 0.645 0.664 0.701 0.701 Table 7: The Effects of Social Features. 5.3 Effects of Candidate Detection The performance with and without candidate detection step (using all features) is shown in Table 8. The gain is small since the combination of all features in the learning to rank framework can already well capture the relationship between a morph and a target candidate. Nevertheless, the temporal distribution assumption is effective. It helps to filter out 80% of unrelated targets and speed up the system 5 times, while retain 98.5% of the morph candidates that can be detected. System Acc@1 Acc@5 Acc@10 Acc@20 Without 0.365 0.579 0.645 0.696 With 0.379 0.594 0.659 0.701 Table 8: The Effects of Temporal Constraint We also attempted using topic modeling approach to detect target candidates. Due to the large amount of data, we first split the data set on a daily basis, then applied Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999). Named entities which co-occur at least δ times with a morph query in the same topic are selected as its target candidates. As shown in Table9 (K is the number of predefined topics), PLSA is not quite effective mainly because traditional topic modeling approaches do not perform well on short texts from social media. Therefore, in this paper we choose a simple method based on temporal distribution to detect target candidates. Method All Temporal PLSA( PLSA( K = 5 K = 5 δ = 1) δ = 2) Acc 0.935 0.921 0.935 0.925 No. 
8, 111 1, 964 6, 380 4, 776 Method PLSA( PLSA( PLSA( PLSA( K = 10 K = 10 K = 20 K = 20 δ = 1) δ = 2) δ = 1) δ = 2) Acc 0.935 0.907 0.888 0.757 No. 5, 117 3, 138 3, 702 1, 664 Table 9: Accuracy of Target Candidate Detection 5.4 Discussions Compared with the standard alias detection (“Surf+HomB”) approach (Hsiung et al., 2005), our proposed approach achieves significantly better performance (99.9% confidence level by the Wilcoxon Matched-Pairs Signed-Ranks Test for Acc@1). We further explore two types of factors which may affect the system performance as follows. One important aspect affecting the resolution performance is the morph & non-morph ambiguity. We categorize a morph query as “Unique” if the string is mainly used as a morph when it occurs, such as “薄督(Bodu)” which is used to refer to “Bo Xilai”; otherwise as “Common” (e.g. “宝宝(Baby)” ,“校长(President)” ). Table 10 presents the separate scores for these two categories. We can see that the morphs in “Unique” 1090 category have much better resolution performance than those in “Common” category. Category Number Acc@1 Acc@5 Acc@10 Acc@20 Unique 72 0.479 0.715 0.771 0.819 Common 35 0.171 0.343 0.40 0.429 Table 10: Performance of Two Categories We also investigate the effects of popularity of morphs on the resolution performance. We split the queries into 5 bins with equal size based on the non-descending frequency, and evaluate Acc@1 separately. As shown in Table11, we can see that the popularity is not highly correlated with the performance. Rank 0 ∼ 20% 20% ∼ 40% 40% ∼ 60% 60% ∼ 80% 80% ∼ 100% All 0.333 0.476 0.341 0.429 0.318 Unique 0.321 0.679 0.379 0.571 0.483 Common 0.214 0.214 0.071 0.071 0.286 Table 11: Effects of Popularity of Morphs 6 Related Work To analyze social media behavior under active censorship, (Bamman et al., 2012) automatically discovered politically sensitive terms from Chinese tweets based on message deletion analysis. In contrast, our work goes beyond target idendification by resolving implicit morphs to their real targets. Our work is closely related to alias detection (Hsiung et al., 2005; Pantel, 2006; Bollegala et al., 2011; Holzer et al., 2005). We demonstrated that state-of-the-art alias detection methods did not perform well on morph resolution. In this paper we exploit cross-genre information and social correlation to measure semantic similarity. (Yang et al., 2011; Huang et al., 2012) also showed the effectiveness of exploiting information from formal web documents to enhance tweet summarization and tweet ranking. Other similar research lines are the TAC-KBP Entity Linking (EL) (Ji et al., 2010; Ji et al., 2011), which links a named entity in news and web documents to an appropriate knowledge base (KB) entry, the task of mining name translation pairs from comparable corpora (Udupa et al., 2009; Ji, 2009; Fung and Yee, 1998; Rapp, 1999; Shao and Ng, 2004; Hassan et al., 2007) and the link prediction problem (Adamic and Adar, 2001; LibenNowell and Kleinberg, 2003; Sun et al., 2011b; Hasan et al., 2006; Wang et al., 2007; Sun et al., 2011a). Most of the work focused on unstructured or structured data with clean and rich relations (e.g. DBLP). In contrast, our work constructs heterogeneous information networks from unstructured, noisy multi-genre text without explicit entity attributes. 7 Conclusion and Future Work To the best of our knowledge, this is the first work of resolving implicit information morphs from the data under active censorship. 
Our promising results can well serve as a benchmark for this new problem. Both of the Meta-path based and social correlation based semantic similarity measurements are proven powerful and complementary. In this paper we have focused on entity morphs. In the future we will extend our method to discover other types of information morphs, such as events and nominal mentions. In addition, automatic identification of candidate morphs is another challenging task, especially when the mentions are ambiguous and can also refer to other real entities. Our ongoing work includes identifying candidate morphs from scratch, as well as discovering morphs for a given target based on anomaly analysis and textual coherence modeling. Acknowledgments Thanks to the three anonymous reviewers for their insightful comments. This work was supported by the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF- 09-2-0053 (NS-CTA), the U.S. NSF CAREER Award under Grant IIS-0953149, the U.S. NSF EAGER Award under Grant No. IIS-1144111, the U.S. DARPA FA8750-13-2-0041 - Deep Exploration and Filtering of Text (DEFT) Program, the U.S. DARPA under Agreement No. W911NF-12-C-0028, CUNY Junior Faculty Award, NSF IIS-0905215, CNS0931975, CCF-0905014, and MIAS, a DHS-IDS Center for Multimodal Information Access and Synthesis at UIUC. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 1091 References Lada A. Adamic and Eytan Adar. 2001. Friends and neighbors on the web. SOCIAL NETWORKS, 25:211–230. Aris Anagnostopoulos, Ravi Kumar, and Mohammad Mahdian. 2008. Influence and correlation in social networks. In KDD, pages 7–15. David Bamman, Brendan O’Connor, and Noah A. Smith. 2012. Censorship and deletion practices in chinese social media. First Monday, 17(3). Patrick Barwise and Se´an Meehan. 2010. The one thing you must get right when building a brand. Harvard Business Review, 88(12):80–84. D. Bollegala, Y. Matsuo, and M. Ishizuka. 2011. Automatic discovery of personal name aliases from the web. Knowledge and Data Engineering, IEEE Transactions on, 23(6):831–844. Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing chinese word segmentation for machine translation performance. In Proceedings of the Third Workshop on Statistical Machine Translation, StatMT ’08, pages 224–232. Pascale Fung and Lo Yuen Yee. 1998. An ir approach for translating new words from nonparallel, comparable texts. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1, ACL ’98, pages 414–420. Mohammad Al Hasan, Vineet Chaoji, Saeed Salem, and Mohammed Zaki. 2006. Link prediction using supervised learning. In In Proc. of SDM 06 workshop on Link Analysis, Counterterrorism and Security. Ahmed Hassan, Haytham Fahmy, and Hany Hassan. 2007. Improving named entity translation by exploiting comparable and parallel corpora. In RANLP. Daniel S. Hirschberg. 1977. Algorithms for the longest common subsequence problem. J. ACM, 24(4):664–675. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. 
In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’99, pages 50–57. Ralf Holzer, Bradley Malin, and Latanya Sweeney. 2005. Email alias detection using social network analysis. In Conference on Knowledge Discovery in Data: Proceedings of the 3 rd international workshop on Link discovery, volume 21, pages 52–57. Paul Hsiung, Andrew Moore, Daniel Neill, and Jeff Schneider. 2005. Alias detection in link data sets. In Proceedings of the International Conference on Intelligence Analysis, May. Hongzhao Huang, Arkaitz Zubiaga, Heng Ji, Hongbo Deng, Dong Wang, Hieu Khac Le, Tarek F. Abdelzaher, Jiawei Han, Alice Leung, John Hancock, and Clare R. Voss. 2012. Tweet ranking based on heterogeneous networks. In COLING, pages 1239– 1256. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL, pages 254–262. H. Ji, R. Grishman, H.T. Dang, K. Griffitt, and J. Ellis. 2010. Overview of the tac 2010 knowledge base population track. In Text Analysis Conference (TAC) 2010. H. Ji, R. Grishman, and H.T. Dang. 2011. Overview of the tac 2011 knowledge base population track. In Text Analysis Conference (TAC) 2011. Heng Ji. 2009. Mining name translations from comparable corpora by creating bilingual information networks. In Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Non-parallel Corpora, BUCC ’09, pages 34–37. David Liben-Nowell and Jon Kleinberg. 2003. The link prediction problem for social networks. In Proceedings of the twelfth international conference on Information and knowledge management, CIKM ’03, pages 556–559. Ching-Yung Lin, Lynn Wu, Zhen Wen, Hanghang Tong, Vicky Griffiths-Fisher, Lei Shi, and David Lubensky. 2012. Social network analysis in enterprise. Proceedings of the IEEE, 100(9):2759–2776. Vincent Ng. 2010. Supervised noun phrase coreference research: the first fifteen years. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 1396– 1411. Patrick Pantel. 2006. Alias detection in malicious environments. In AAAI Fall Symposium on Capturing and Using Patterns for Evidence Detection, pages 14–20. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated english and german corpora. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, ACL ’99, pages 519– 526. Li Shao and Hwee Tou Ng. 2004. Mining new word translations from comparable corpora. In Proceedings of the 20th international conference on Computational Linguistics, COLING ’04. Yizhou Sun, Rick Barber, Manish Gupta, Charu C. Aggarwal, and Han Jiawei. 2011a. Co-author relationship prediction in heterogeneous bibliographic networks. In Proceedings of the 2011 International Conference on Advances in Social Networks Analysis and Mining, ASONAM ’11, pages 121–128. 1092 Yizhou Sun, Jiawei Han, Xifeng Yan, Philip S. Yu, and Tianyi Wu. 2011b. Pathsim: Meta path-based top-k similarity search in heterogeneous information networks. PVLDB, 4(11):992–1003. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 173–180. Raghavendra Udupa, K. Saravanan, A. 
Kumaran, and Jagadeesh Jagarlamudi. 2009. Mint: a method for effective and scalable mining of named entity transliterations from large comparable corpora. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL ’09, pages 799–807. Robert A. Wagner and Michael J. Fischer. 1974. The string-to-string correction problem. J. ACM, 21(1):168–173. Chao Wang, Venu Satuluri, and Srinivasan Parthasarathy. 2007. Local probabilistic models for link prediction. In Proceedings of the 2007 Seventh IEEE International Conference on Data Mining, ICDM ’07, pages 322–331. Zhen Wen and Ching-Yung Lin. 2010. On the quality of inferring interests from social neighbors. In KDD, pages 373–382. Zi Yang, Keke Cai, Jie Tang, Li Zhang, Zhong Su, and Juanzi Li. 2011. Social context summarization. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, SIGIR ’11, pages 255–264. Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. Hhmm-based chinese lexical analyzer ictclas. In Proceedings of the second SIGHAN workshop on Chinese language processing - Volume 17, SIGHAN ’03, pages 184–187. 1093
2013
107
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1094–1104, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Learning to Extract International Relations from Political Context Brendan O’Connor School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Brandon M. Stewart Department of Government Harvard University Cambridge, MA 02139, USA [email protected] Noah A. Smith School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Abstract We describe a new probabilistic model for extracting events between major political actors from news corpora. Our unsupervised model brings together familiar components in natural language processing (like parsers and topic models) with contextual political information— temporal and dyad dependence—to infer latent event classes. We quantitatively evaluate the model’s performance on political science benchmarks: recovering expert-assigned event class valences, and detecting real-world conflict. We also conduct a small case study based on our model’s inferences. A supplementary appendix, and replication software/data are available online, at: http://brenocon.com/irevents 1 Introduction The digitization of large news corpora has provided an unparalleled opportunity for the systematic study of international relations. Since the mid1960s political scientists have used political events data, records of public micro-level interactions between major political actors of the form “someone does something to someone else” as reported in the open press (Schrodt, 2012), to study the patterns of interactions between political actors and how they evolve over time. Scaling this data effort to modern corpora presents an information extraction challenge: can a structured collection of accurate, politically relevant events between major political actors be extracted automatically and efficiently? And can they be grouped into meaningful event types with a low-dimensional structure useful for further analysis? We present an unsupervised approach to event extraction, in which political structure and linguistic evidence are combined. A political context model of the relationship between a pair of political actors imposes a prior distribution over types of linguistic events. Our probabilistic model infers latent frames, each a distribution over textual expressions of a kind of event, as well as a representation of the relationship between each political actor pair at each point in time. We use syntactic preprocessing and a logistic normal topic model, including latent temporal smoothing on the political context prior. We apply the model in a series of comparisons to benchmark datasets in political science. First, we compare the automatically learned verb classes to a pre-existing ontology and hand-crafted verb patterns from TABARI,1 an open-source and widely used rule-based event extraction system for this domain. Second, we demonstrate correlation to a database of real-world international conflict events, the Militarized Interstate Dispute (MID) dataset (Jones et al., 1996). Third, we qualitatively examine a prominent case not included in the MID dataset, Israeli-Palestinian relations, and compare the recovered trends to the historical record. We outline the data used for event discovery (§2), describe our model (§3), inference (§4), evaluation (§5), and comment on related work (§6). 
2 Data The model we describe in §3 is learned from a corpus of 6.5 million newswire articles from the English Gigaword 4th edition (1994–2008, Parker et al., 2009). We also supplement it with a sample of data from the New York Times Annotated Corpus (1987–2007, Sandhaus, 2008).2 The Stan1Available from the Penn State Event Data Project: http://eventdata.psu.edu/ 2For arbitrary reasons this portion of the data is much smaller (we only parse the first five sentences of each article, while Gigaword has all sentences parsed), resulting in less than 2% as many tuples as from the Gigaword data. 1094 ford CoreNLP system,3 under default settings, was used to POS-tag and parse the articles, to eventually produce event tuples of the form ⟨s, r, t, wpredpath⟩ where s and r denote “source” and “receiver” arguments, which are political actor entities in a predefined set E, t is a timestep (i.e., a 7-day period) derived from the article’s published date, and wpredpath is a textual predicate expressed as a dependency path that typically includes a verb (we use the terms “predicate-path” and “verb-path” interchangeably). For example, on January 1, 2000, the AP reported “Pakistan promptly accused India,” from which our preprocessing extracts the tuple ⟨PAK, IND, 678, accuse dobj ←−−⟩. (The path excludes the first source-side arc.) Entities and verb paths are identified through the following sets of rules. Named entity recognition and resolution is done deterministically by finding instances of country names from the CountryInfo.txt dictionary from TABARI,4 which contains proper noun and adjectival forms for countries and administrative units. We supplement these with a few entries for international organizations from another dictionary provided by the same project, and clean up a few ambiguous names, resulting in a final actor dictionary of 235 entities and 2,500 names. Whenever a name is found, we identify its entity’s mention as the minimal noun phrase that contains it; if the name is an adjectival or nounnoun compound modifier, we traverse any such amod and nn dependencies to the noun phrase head. Thus NATO bombing, British view, and Palestinian militant resolve to the entity codes IGONAT, GBR, and PSE respectively. We are interested in identifying actions initiated by agents of one country targeted towards another, and hence concentrate on verbs, analyzing the “CCprocessed” version of the Stanford Dependencies (de Marneffe and Manning, 2008). Verb paths are identified by looking at the shortest dependency path between two mentions in a sentence. If one of the mentions is immediately dominated by a nsubj or agent relation, we consider that the Source actor, and the other mention is the Receiver. The most common cases are simple direct objects and prepositional arguments like talk 3http://nlp.stanford.edu/software/ corenlp.shtml 4http://eventdata.psu.edu/software. dir/dictionaries.html. prep with ←−−−−and fight prep alongside ←−−−−−−(“talk with R,” “fight alongside R”) but many interesting multiword constructions also result, such as reject dobj ←−−allegation poss ←−−(“rejected R’s allegation”) or verb chains as in offer xcomp ←−−help dobj ←−−(“offer to help R”). We wish to focus on instances of directly reported events, so attempt to remove factively complicated cases such as indirect reporting and hypotheticals by discarding all predicate paths for which any verb on the path has an off-path governing verb with a non-conj relation. 
(For example, the verb at the root of a sentence always survives this filter.) Without this filter, the ⟨s, r, w⟩ tuple ⟨USA, CUB, want xcomp ←−− seize dobj ←−−⟩ is extracted from the sentence "Parliament Speaker Ricardo Alarcon said the United States wants to seize Cuba and take over its lands"; the filter removes it since wants is dominated by an off-path verb through say ccomp ←−− wants. The filter was iteratively developed by inspecting dozens of output examples and their labelings under successive changes to the rules. Finally, only paths of length 4 or less are allowed, the final dependency relation for the receiver may not be nsubj or agent, and the path may not contain any of the dependency relations conj, parataxis, det, or dep. We use lemmatized word forms in defining the paths. Several document filters are applied before tuple extraction. Deduplication removes 8.5% of articles.5 (Footnote 5: We use a simple form of shingling (ch. 3, Rajaraman and Ullman, 2011): we represent a document's signature as its J = 5 lowercased bigrams with the lowest hash values, and reject a document whose signature has been seen before within the same month. J was manually tuned, as it affects the precision/recall tradeoff.) For topic filtering, we apply a series of keyword filters to remove sports and finance news, and also apply a text classifier for diplomatic and military news, trained on several hundred manually labeled news articles (using ℓ1-regularized logistic regression with unigram and bigram features). Other filters remove non-textual junk and non-standard punctuation likely to cause parse errors. For the experiments we remove tuples where the source and receiver entities are the same, and restrict to tuples with dyads that occur at least 500 times and predicate paths that occur at least 10 times. This yields 365,623 event tuples from 235,830 documents, for 421 dyads and 10,457 unique predicate paths. We define timesteps to be 7-day periods, resulting in 1,149 discrete timesteps (1987 through 2008, though the vast majority of the data starts in 1994).

Figure 1: Directed probabilistic diagram of the model for one (s, r, t) dyad-time context, for the smoothed model. (The diagram links a Context Model, P(Event Type | Context), with parameters αk, σ2k, τ2 and variables βk,s,r,t−1, βk,s,r,t, ηk,s,r,t and θs,r,t, to a Language Model, P(Text | Event Type), with parameter b, frame-specific distributions φ, and per-tuple variables z and wpredpath; here s is the "Source" entity, r the "Receiver" entity, t the timestep, i an event tuple and k a frame.)

3 Model

We design two models to learn linguistic event classes over predicate paths by conditioning on real-world contextual information about international politics, p(wpredpath | s, r, t), leveraging the fact that there tends to be dyadic and temporal coherence in international relations: the types of actions that are likely to occur between nations tend to be similar within the same dyad, and their distribution usually changes smoothly over time. Our model decomposes into two submodels: a Context submodel, which encodes how political context affects the probability distribution over event types, and a Language submodel, which describes how those events are manifested as textual predicate paths (Figure 1). The overall generative process is as follows. We color global parameters for a frame blue and local context parameters red, and use the term "frame" as a synonym for "event type." The fixed hyperparameter K denotes the number of frames.
• The context model generates a frame prior θs,r,t for every context (s, r, t).
• Language model:
• Draw lexical sparsity parameter b from a diffuse prior (see §4).
• For each frame k, draw a multinomial distribution of dependency paths, φk ∼Dir(b/V ) (where V is the number of dependency path types). • For each (s, r, t), for every event tuple i in that context, • Sample its frame z(i) ∼Mult(θs,r,t). • Sample its predicate realization w(i) predpath ∼Mult(φz(i)). Thus the language model is very similar to a topic model’s generation of token topics and wordtypes. We use structured logistic normal distributions to represent contextual effects. The simplest is the vanilla (V) context model, • For each frame k, draw global parameters from diffuse priors: prevalence αk and variability σ2 k. • For each (s, r, t), • Draw ηk,s,r,t ∼N(αk, σ2 k) for each frame k. • Apply a softmax transform, θk,s,r,t = exp ηk,s,r,t PK k′=1 exp ηk′,s,r,t Thus the vector η∗,s,r,t encodes the relative logodds of the different frames for events appearing in the context (s, r, t). This simple logistic normal prior is, in terms of topic models, analogous to the asymmetric Dirichlet prior version of LDA in Wallach et al. (2009), since the αk parameter can learn that some frames tend to be more likely than others. The variance parameters σ2 k control admixture sparsity, and are analogous to a Dirichlet’s concentration parameter. Smoothing Frames Across Time The vanilla model is capable of inducing frames through dependency path co-occurences, when multiple events occur in a given context. However, many dyad-time slices are very sparse; for example, most dyads (all but 18) have events in fewer than half the time slices in the dataset. One solution is to increase the bucket size (e.g., to months); however, previous work in political science has demonstrated that answering questions of interest about reciprocity dynamics requires recovering the events at weekly or even daily granularity (Shellman, 2004), and in any case wide buckets help only so much for dyads with fewer events or less media attention. Therefore we propose a smoothed frames (SF) model, in which the 1096 frame distribution for a given dyad comes from a latent parameter β∗,s,r,t that smoothly varies over time. For each (s, r), draw the first timestep’s values as βk,s,r,1 ∼N(0, 100), and for each context (s, r, t > 1), • Draw βk,s,r,t ∼N(βk,s,r,t−1, τ 2) • Draw ηk,s,r,t ∼N(αk + βk,s,r,t, σ2 k) Other parameters (αk, σ2 k) are same as the vanilla model. This model assumes a random walk process on β, a variable which exists even for contexts that contain no events. Thus inferences about η will be smoothed according to event data at nearby timesteps. This is an instance of a linear Gaussian state-space model (also known as a linear dynamical system or dynamic linear model), and is a convenient formulation because it has well-known exact inference algorithms. Dynamic linear models have been used elsewhere in machine learning and political science to allow latent topic frequencies (Blei and Lafferty, 2006; Quinn et al., 2010) and ideological positions (Martin and Quinn, 2002) to smoothly change over time, and thus share statistical strength between timesteps. 4 Inference After randomly initializing all ηk,s,r,t, inference is performed by a blocked Gibbs sampler, alternating resamplings for three major groups of variables: the language model (z,φ), context model (α, γ, β, p), and the η, θ variables, which bottleneck between the submodels. 
The language model sampler sequentially updates every z(i) (and implicitly φ via collapsing) in the manner of Griffiths and Steyvers (2004): p(z(i)|θ, w(i), b) ∝θs,r,t,z(nw,z + b/V )/(nz + b), where counts n are for all event tuples besides i. For the context model, α is conjugate resampled as a normal mean. The random walk variables β are sampled with the forward-filteringbackward-sampling algorithm (FFBS; Harrison and West, 1997; Carter and Kohn, 1994); there is one slight modification of the standard dynamic linear model that the zero-count weeks have no η observation; the Kalman filter implementation is appropriately modified to handle this. The η update step is challenging since it is a nonconjugate prior to the z counts. Logistic normal distributions were introduced to text modeling by Blei and Lafferty (2007), who developed a variational approximation; however, we find that experimenting with different models is easier in the Gibbs sampling framework. While Gibbs sampling for logistic normal priors is possible using auxiliary variable methods (Mimno et al., 2008; Holmes and Held, 2006; Polson et al., 2012), it can be slow to converge. We opt for the more computationally efficient approach of Zeger and Karim (1991) and Hoff (2003), using a Laplace approximation to p(η | ¯η, Σ, z), which is a mode-centered Gaussian having inverse covariance equal to the unnormalized log-posterior’s negative Hessian (§8.4 in Murphy, 2012). We find the mode with the linear-time Newton algorithm from Eisenstein et al. (2011), and sample in linear time by only using the Hessian’s diagonal as the inverse covariance (i.e., an axis-aligned normal), since a full multivariate normal sample requires a cubic-time-to-compute Cholesky root of the covariance matrix. This η∗sample is a proposal for a Metropolis-within-Gibbs step, which is moved to according to the standard Metropolis-Hastings acceptance rule. Acceptance rates differ by K, ranging approximately from 30% (K = 100) to nearly 100% (small K). Finally, we use diffuse priors on all global parameters, conjugate resampling variances τ 2, σk once per iteration, and slice sampling (Neal, 2003) the Dirichlet concentration b every 100 iterations. Automatically learning these was extremely convenient for model-fitting; the only hyperparameter we set manually was K. It also allowed us to monitor the convergence of dispersion parameters to help debug and assess MCMC mixing. For other modeling and implementation details, see the online appendix and software. 5 Experiments We fit the two models on the dataset described in §2, varying the number of frames K, with 8 or more separate runs for each setting. Posteriors are saved and averaged from 11 Gibbs samples (every 100 iterations from 9,000 to 10,000) for analysis. We present intrinsic (§5.1) and extrinsic (§5.2) quantitative evaluations, and a qualitative case study (§5.4). 5.1 Lexical Scale Impurity In the international relations literature, much of the analysis of text-based events data makes use of a unidimensional conflict to cooperation scale. A popular event ontology in this domain, CAMEO, consists of around 300 different event types, each 1097 given an expert-assigned scale in the range from −10 to +10 (Gerner et al., 2002), derived from a judgement collection experiment in Goldstein (1992). The TABARI pattern-based event extraction program comes with a list of almost 16,000 manually engineered verb patterns, each assigned to one CAMEO event type. 
It is interesting to consider the extent to which our unsupervised model is able to recover the expert-designed ontology. Given that many of the categories are very fine-grained (e.g., "Express intent to de-escalate military engagement"), we elect to measure model quality as lexical scale purity: whether all the predicate paths within one automatically learned frame tend to have similar gold-standard scale scores. (This measures cluster cohesiveness against a one-dimensional continuous scale, instead of measuring cluster cohesiveness against a gold-standard clustering as in VI, Rand index, or purity.)
To calculate this, we construct a mapping between our corpus-derived verb path vocabulary and the TABARI verb patterns, many of which contain one to several word stems that are intended to be matched in surface order. Many of our dependency paths, when traversed from the source to receiver direction, also follow surface order, due to English's SVO word order.6 Therefore we convert each path to a word sequence and match against the TABARI lexicon—plus a few modifications for differences in infinitives and stemming—and find 528 dependency path matches. We assign each path w a gold-standard scale g(w) by resolving through its matching pattern's CAMEO code.
We formalize lexical scale impurity as the average absolute difference of scale values between two predicate paths under the same frame. Specifically, we want the token-level posterior expectation
$$E\left(\,|g(w_i) - g(w_j)|\ \big|\ z_i = z_j,\ w_i \neq w_j\,\right) \quad (1)$$
which is taken over pairs of path instances (i, j) where both paths wi, wj are in M, the set of verb paths that were matched between the lexicons. This can be reformulated at the type level as:7
$$\frac{1}{N} \sum_{k} \sum_{\substack{w, v \in M \\ w \neq v}} n_{w,k}\, n_{v,k}\, |g(w) - g(v)| \quad (2)$$
where n refers to the averaged Gibbs samples' counts of event tuples having frame k and a particular verb path,8 and N is the number of token comparisons (i.e., the same sum, but with a 1 replacing the distance). The worst possible impurity is upper bounded at 20 (= max(g(w)) − min(g(w))) and the best possible is 0.
We also compute a randomized null hypothesis to see how low impurity can be by chance: each of ∼1,000 simulations randomly assigns each path in M to one of K frames (all its instances are exclusively assigned to that frame), and computes the impurity. On average the impurity is the same at all K, but the variance increases with K (since small clusters might by chance contain highly similar paths), necessitating this null hypothesis analysis. We report the 5th percentile over simulations.
6 There are plenty of exceptions where a source-to-receiver path traversal can have a right-to-left move, such as dependency edges for possessives. This approach cannot match them.
7 Derivation in the supplementary appendix.
8 Results are nearly identical whether we use counts averaged across samples (thus giving posterior marginals), or simply use counts from a single sample (i.e., iteration 10,000).
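The type-level impurity in Eq. 2 is a weighted average of pairwise scale distances over matched paths; a minimal sketch of that computation follows, assuming a count matrix restricted to the matched path set M and a parallel vector of gold scales (the array layout is an assumption for illustration).

```python
import numpy as np

def lexical_scale_impurity(n_mk, g_m):
    """Eq. 2: n_mk[m, k] = averaged count of event tuples with matched path m
    and frame k; g_m[m] = gold CAMEO-derived scale of path m (numpy arrays)."""
    M, K = n_mk.shape
    dist = np.abs(g_m[:, None] - g_m[None, :])    # |g(w) - g(v)| for all path pairs
    np.fill_diagonal(dist, 0.0)
    num, den = 0.0, 0.0
    for k in range(K):
        pair_w = np.outer(n_mk[:, k], n_mk[:, k])  # n_{w,k} * n_{v,k}
        np.fill_diagonal(pair_w, 0.0)              # exclude w == v pairs
        num += (pair_w * dist).sum()
        den += pair_w.sum()                        # N: same sum, distance replaced by 1
    return num / den if den > 0 else float("nan")
```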
5.2 Conflict Detection
Political events data has shown considerable promise as a tool for crisis early warning systems (O'Brien, 2010; Brandt et al., 2011). While conflict forecasting is a potential application of our model, we conduct a simpler prediction task to validate whether the model is learning something useful: based on news text, tell whether or not an armed conflict is currently happening. For a gold standard, we use the Militarized Interstate Dispute (MID) dataset (Jones et al., 1996; Ghosn et al., 2004), which documents historical international disputes. While not without critics, the MID data is the most prominent dataset in the field of international relations.
We use the Dyadic MIDs, each of which ranks hostility levels between pairs of actors on a five-point scale over a date interval; we define conflict to be the top two categories, "Use of Force" (4) and "War" (5). We convert the data into a variable ys,r,t, the highest hostility level reached by actor s directed towards receiver r in the dispute that overlaps with our 7-day interval t, and want to predict the binary indicator 1{ys,r,t ≥ 4}. For the illustrative examples (USA to Iraq, and the Israel–Palestine example below) we use results from a smaller but more internally comparable dataset consisting of the 2 million Associated Press articles within the Gigaword corpus.
Figure 2: The USA→Iraq directed dyad, analyzed by smoothed (above) and vanilla (below) models, showing (1) gold-standard MID values (red intervals along top), (2) weeks with non-zero event counts (vertical lines along x-axis), (3) posterior E[θk,USA,IRQ,t] inferences for two frames chosen from two different K = 5 models, and (4) most common verb paths for each frame (right): "kill, fire at, seal, invade, enter"; "accuse, criticize, warn, reject, urge"; "accuse, reject, blame, kill, take"; "criticize, call, ask, condemn, denounce". Frames corresponding to material and verbal conflict were chosen for display. Vertical line indicates Operation Desert Fox (see §5.2).
For an example of the MID data, see Figure 2, which depicts three disputes between the US and Iraq in this time period. The MID labels are marked in red. The first dispute is a "display of force" (level 3), cataloguing the U.S. response to a series of troop movements along the border with Kuwait. The third dispute (10/7/1997 to 10/10/2001) begins with increasing Iraqi violations of the no-fly zone, resulting in U.S. and U.K. retaliation, reaching a high intensity with Operation Desert Fox, a four-day bombing campaign from December 16 to 19, 1998—which is not shown in MID. These cases highlight MID's limitations—while it is well regarded in the political science literature, its coarse level of aggregation can fail to capture variation in conflict intensity.
Figure 2 also shows model inferences. Our smoothed model captures some of these phenomena here, showing clear trends for two relevant frames, including a dramatic change in December 1998. The vanilla model has a harder time, since it cannot combine evidence between different timesteps.
The MID dataset overlaps with our data for 470 weeks, from 1993 through 2001. After excluding dyads with actors that the MID data does not intend to include—Kosovo, Tibet, Palestine, and international organizations—we have 267 directed dyads for evaluation, 117 of which have at least one dispute in the MID data. (Dyads with no dispute in the MID data, such as Germany–France, are assumed to have y = 0 throughout the time period.) About 7% of the dyad-time contexts have a dispute under these definitions.
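A minimal sketch of how dispute records might be converted into the binary prediction target defined above; the record fields and names are illustrative assumptions, not the MID file format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Dispute:
    source: str      # actor code, e.g. "USA"
    receiver: str    # actor code, e.g. "IRQ"
    start: date
    end: date
    hostility: int   # dyadic MID hostility level, 1-5

def conflict_label(disputes, s, r, week_start, week_end):
    """Return 1{y_{s,r,t} >= 4}: does any dispute directed s -> r that overlaps
    the 7-day interval reach 'Use of Force' (4) or 'War' (5)?"""
    y = 0
    for d in disputes:
        overlaps = d.start <= week_end and d.end >= week_start
        if d.source == s and d.receiver == r and overlaps:
            y = max(y, d.hostility)
    return int(y >= 4)
```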
We split the dataset by time, training on the first half of the data and testing on the second half, and measure area under the receiver operating characteristic curve (AUC).9 For each model, we train an ℓ1-regularized logistic regression10 with the K elements of θ∗,s,r,t as input features, tuning the regularization parameter within the training set (by splitting it in half again) to optimize held-out likelihood. We weight instances to balance positive and negative examples. Training is on all individual θ samples at once (thus accounting for posterior uncertainty in learning), and final predicted probabilities are averaged from the individual probabilities from each θ test-set sample, thus propagating posterior uncertainty into the predictions. We also create a baseline ℓ1-regularized logistic regression that uses normalized dependency path counts as the features (10,457 features). For both the baseline and the vanilla model, contexts with no events are given a feature vector of all zeros.11 (We also explored an alternative evaluation setup, to hold out by dyad; however, the performance variance is quite high between different random dyad splits.)
9 AUC can be interpreted as follows: given a positive and a negative example, what is the probability that the classifier's confidences order them correctly? Random noise or predicting all the same class both give AUC 0.5.
10 Using the R glmnet package (Friedman et al., 2010).
11 For the vanilla model, this performed better than linear interpolation (about 0.03 AUC), and with less variance between runs.
5.3 Results
Results are shown in Figure 3.12
Figure 3: Evaluation results. Each point indicates one model run. Lines show the average per K, with vertical lines indicating the 95% bootstrapped interval. Top: conflict detection AUC for different models (§5.2); the green line is the verb-path logistic regression baseline. Bottom: lexical scale impurity (§5.1); the top green line indicates the simple random baseline E(|g(wi) − g(wj)|) = 5.33, and the second green line is from the random assignment baseline.
12 Due to an implementation bug, the model put the vast majority of the probability mass only on K − 1 frames, so these settings might be better thought of as K = 1, 2, 3, 4, 9, . . .; see the appendix for details.
The verb-path logistic regression performs strongly at AUC 0.62; it outperforms all of the vanilla frame models. This is an example of individual lexical features outperforming a topic model for a predictive task, because the topic model's dimension reduction obscures important indicators from individual words. Similarly, Gerrish and Blei (2011) found that word-based regression outperformed a customized topic model when predicting Congressional bill passage, and Eisenstein et al. (2010) found that word-based regression outperformed Supervised LDA for geolocation,13 and we have noticed this phenomenon for other text-based prediction problems. However, adding smoothing to the model substantially increases performance, and in fact outperforms the verb-path regression at K = 100. It is unclear why the vanilla model fails to increase performance in K.
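The evaluation protocol just described can be sketched roughly as follows, using scikit-learn as a stand-in for the R glmnet setup in the paper; the array layout, the omission of regularization tuning, and all names are simplifying assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def conflict_auc(theta_train, y_train, theta_test_samples, y_test, C=1.0):
    """theta_train: (n_train, K) frame proportions (posterior samples stacked);
    theta_test_samples: list of (n_test, K) arrays, one per posterior sample.
    Fits an L1-regularized logistic regression with balanced class weights,
    then averages predicted probabilities over posterior samples before AUC."""
    clf = LogisticRegression(penalty="l1", solver="liblinear",
                             C=C, class_weight="balanced")
    clf.fit(theta_train, y_train)
    probs = np.mean([clf.predict_proba(t)[:, 1] for t in theta_test_samples],
                    axis=0)
    return roc_auc_score(y_test, probs)
```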
Note also, the vanilla model exhibits very little variability in prediction performance between model runs, in comparison to the smoothed model which is much more variable (presumably due to the higher number of parameters in the model); at small values of K, the smoothed model can perform poorly. It would also be interesting to analyze the smoothed model with higher values of K and find where it peaks. We view the conflict detection task only as one of several validations, and thus turn to lexical evaluation of the induced frames. For lexical scale purity (bottom of Figure 3), the models perform about the same, with the smoothed model a little bit worse at some values of K (though sometimes with better stability of the fits—opposite of the conflict detection task). This suggests that semantic coherence does not benefit from the longer13In the latter, a problem-specific topic model did best. range temporal dependencies. In general, performance improves with higher K, but not beyond K = 50. This suggests the model reaches a limit for how fine-grained of semantics it can learn. 5.4 Case study Here we qualitatively examine the narrative story between the dyad with the highest frequency of events in our dataset, the Israeli-Palestinian relationship, finding qualitative agreement with other case studies of this conflict (Brandt et al., 2012; Goldstein et al., 2001; Schrodt and Gerner, 2004). (The MID dataset does not include this conflict because the Palestinians are not considered a state actor.) Using the Associated Press subset, we plot the highest incidence frames from one run of the K = 20 smoothed frame models, for the two directed dyads, and highlight some of the interesting relationships. Figure 4(a) shows that tradeoffs in the use of military vs. police action by Israel towards the Palestinians tracks with major historical events. The first period in the data where police actions (‘impose, seal, capture, seize, arrest’) exceed military actions (‘kill, fire, enter, attack, raid’) is with the signing of the “Interim Agreement on the West Bank and the Gaza Strip,” also known as the Oslo II agreement. This balance persists until the abrupt breakdown in relations that followed the unsuccessful Camp David Summit in July of 2000, which generally marks the starting point of the wave of violence known as the Second Intifada. In Figure 4(b) we show that our model produces a frame which captures the legal aftermath of particular events (‘accuse, criticize,’ but also ‘detain, release, extradite, charge’). Each of the major spikes in the data coincides with a particular event which either involves the investigation of a particular attack or series of attacks (as in A,B,E) or a discussion about prisoner swaps or mass arrests (as in events D, F, J). Our model also picks up positive diplomatic events, as seen in Figure 4(c), a frame describing Israeli diplomatic actions towards Palestine (‘meet with, sign with, praise, say with, arrive in’). Not only do the spikes coincide with major peace treaties and negotiations, but the model correctly characterizes the relative lack of positively valenced action from the beginning of the Second Intifada until its end around 2005–2006. In Figure 4(d) we show the relevant frames de1100 a. 0.0 0.4 0.8 Israeli Use of Force Tradeoff 1994 1997 2000 2002 2005 2007 Second Intifada Begins Oslo II Signed b. 
0.0 0.4 0.8 1.2 Police Actions and Crime Response A B C D E F G H I J 1994 1997 2000 2002 2005 2007 A: Series of Suicide Attacks in Jerusalem B: Island of Peace Massacre C: Arrests over Protests D: Tensions over Treatment of Pal. Prisoners E: Passover Massacre F: 400-Person Prisoner Swap G: Gaza Street Bus Bombing H: Stage Club Bombing I: House to House Sweep for 7 militant leaders J: Major Prisoner Release c. 0.0 0.4 0.8 Israeli−Palestinian Diplomacy A B C D E F 1994 1997 2000 2002 2005 2007 C: U.S. Calls for West Bank Withdrawal D: Deadlines for Wye River Peace Accord E: Negotiations in Mecca F: Annapolis Conference A: Israel-Jordan Peace Treaty B: Hebron Protocol d. 0.0 0.4 0.8 Palestinian Use of Force 1994 1997 2000 2002 2005 2007 Figure 4: For Israel-Palestinian directed dyads, plots of E[θ] (proportion of weekly events in a frame) over time, annotated with historical events. (a): Words are ‘kill, fire at, enter, kill, attack, raid, strike, move, pound, bomb’ and ‘impose, seal, capture, seize, arrest, ease, close, deport, close, release’ (b): ‘accuse, criticize, reject, tell, hand to, warn, ask, detain, release, order’ (c): ‘meet with, sign with, praise, say with, arrive in, host, tell, welcome, join, thank’ (d): again the same ‘kill, fire at’ frame in (a), plus the erroneous frame (see text) ‘include, join, fly to, have relation with, protest to, call, include bomber appos ←−−−informer for’. Figures (b) and (c) use linear interpolation for zero-count weeks (thus relying exclusively on the model for smoothing); (a) and (d) apply a lowess smoother. (a-c) are for the ISR→PSE direction; (d) is PSE→ISR. picting use of force from the Palestinians towards the Israelis (brown trend line). At first, the drop in the use of force frame immediately following the start of the Second Intifada seems inconsistent with the historical record. However, there is a concucrrent rise in a different frame driven by the word ‘include’, which actually appears here due to an NLP error compounded with an artifact of the data source. A casualties report article, containing variants of the text “The Palestinian figure includes... 13 Israeli Arabs...”, is repeated 27 times over two years. “Palestinian figure” is erroneously identified as the PSE entity, and several noun phrases in a list are identified as separate receivers. This issue causes 39 of all 86 PSE→ISR events during this period to use the word ‘include’, accounting for the rise in that frame. (This highlights how better natural language processing could help the model, and the dangers of false positives for this type of data analysis, especially in small-sample drilldowns.) Discounting this erroneous inference, the results are consistent with heightened violence during this period. We conclude the frame extractions for the Israeli-Palestinian case are consistent with the historical record over the period of study. 6 Related Work 6.1 Events Data in Political Science Projects using hand-collected events data represent some of the earliest efforts in the statistical study of international relations, dating back to the 1960s (Rummel, 1968; Azar and Sloan, 1975; McClelland, 1970). Beginning in the mid-1980s, political scientists began experimenting with automated rule-based extraction systems (Schrodt and Gerner, 1994). 
These efforts culminated in the open-source program, TABARI, which uses pattern matching from extensive hand-developed phrase dictionaries, combined with basic part of speech tagging (Schrodt, 2001); a rough analogue in the information extraction literature might be the rule-based, finite-state FASTUS system for MUC IE (Hobbs et al., 1997), though TABARI is restricted to single sentence analysis. Later proprietary work has apparently incorporated more extensive NLP (e.g., sentence parsing) though few details are available (King and Lowe, 2003). The most recent published work we know of, by Boschee et al. (2013), uses a proprietary parsing and coreference system (BBN SERIF, Ramshaw et al., 2011), and directly compares to TABARI, finding significantly higher accuracy. The origi1101 nal TABARI system is still actively being developed, including just-released work on a new 200 million event dataset, GDELT (Schrodt and Leetaru, 2013).14 All these systems crucially rely on hand-built pattern dictionaries. It is extremely labor intensive to develop these dictionaries. Schrodt (2006) estimates 4,000 trained person-hours were required to create dictionaries of political actors in the Middle East, and the phrase dictionary took dramatically longer; the comments in TABARI’s phrase dictionary indicate some of its 15,789 entries were created as early as 1991. Ideally, any new events data solution would incorporate the extensive work already completed by political scientists in this area while minimizing the need for further dictionary development. In this work we use the actor dictionaries, and hope to incorporate the verb patterns in future work. 6.2 Events in Natural Language Processing Political event extraction from news has also received considerable attention within natural language processing in part due to governmentfunded challenges such as MUC-3 and MUC-4 (Lehnert, 1994), which focused on the extraction of terrorist events, as well as the more recent ACE program. The work in this paper is inspired by unsupervised approaches that seek to discover types of relations and events, instead of assuming them to be pre-specified; this includes research under various headings such as template/frame/event learning (Cheung et al., 2013; Modi et al., 2012; Chambers and Jurafsky, 2011; Li et al., 2010; Bejan, 2008), script learning (Regneri et al., 2010; Chambers and Jurafsky, 2009), relation learning (Yao et al., 2011), open information extraction (Banko et al., 2007; Carlson et al., 2010), verb caseframe learning (Rooth et al., 1999; Gildea, 2002; Grenager and Manning, 2006; Lang and Lapata, 2010; ´O S´eaghdha, 2010; Titov and Klementiev, 2012), and a version of frame learning called “unsupervised semantic parsing” (Titov and Klementiev, 2011; Poon and Domingos, 2009). Unlike much of the previous literature, we do not learn latent roles/slots. Event extraction is also a large literature, including supervised systems targeting problems similar to MUC and political events (Piskorski and Atkinson, 2011; Piskorski et al., 2011; Sanfilippo et al., 2008). One can also see this work as a relational ex14http://eventdata.psu.edu/data.dir/ GDELT.html tension of co-occurence-based methods such as Gerrish (2013; ch. 4), Diesner and Carley (2005), Chang et al. (2009), or Newman et al. (2006), which perform bag-of-words-style analysis of text fragments containing co-occurring entities. 
(Gerrish also analyzed the international relations domain, using supervised bag-of-words regression to assess the expressed valence between a pair of actors in a news paragraph, using the predictions as observations in a latent temporal model, and compared to MID.) We instead use parsing to get a much more focused and interpretable representation of the relationship between textually cooccurring entities; namely, that they are the source and target of an action event. This is more in line with work in relation extraction on biomedical scientific articles (Friedman et al., 2001; Rzhetsky et al., 2004) which uses parsing to extracting a network of how different entities, like drugs or proteins, interact. 7 Conclusion Large-scale information extraction can dramatically enhance the study of political behavior. Here we present a novel unsupervised approach to an important data collection effort in the social sciences. We see international relations as a rich and practically useful domain for the development of text analysis methods that jointly infer events, relations, and sociopolitical context. There are numerous areas for future work, such as: using verb dictionaries as semi-supervised seeds or priors; interactive learning between political science researchers and unsupervised algorithms; building low-dimensional scaling, or hierarchical structure, into the model; and learning the actor lists to handle changing real-world situations and new domains. In particular, adding more supervision to the model will be crucial to improve semantic quality and make it useful for researchers. Acknowledgments Thanks to Justin Betteridge for providing the parsed Gigaword corpus, Erin Baggott for help in developing the document filter, and the anonymous reviewers for helpful comments. This research was supported in part by NSF grant IIS1211277, and was made possible through the use of computing resources made available by the Pittsburgh Supercomputing Center. Brandon Stewart gratefully acknowledges funding from an NSF Graduate Research Fellowship. References Azar, E. E. and Sloan, T. (1975). Dimensions of interactions. Technical report, University Center of International Studies, University of Pittsburgh, Pittsburgh. 1102 Banko, M., Cafarella, M. J., Soderland, S., Broadhead, M., and Etzioni, O. (2007). Open Information Extraction from the Web. IJCAI. Bejan, C. A. (2008). Unsupervised discovery of event scenarios from texts. In Proceedings of the 21st Florida Artificial Intelligence Research Society International Conference (FLAIRS), Coconut Grove, FL, USA. Blei, D. M. and Lafferty, J. D. (2006). Dynamic topic models. In Proceedings of ICML. Blei, D. M. and Lafferty, J. D. (2007). A correlated topic model of science. Annals of Applied Statistics, 1(1), 17– 35. Boschee, E., Natarajan, P., and Weischedel, R. (2013). Automatic extraction of events from open source text for predictive forecasting. Handbook of Computational Approaches to Counterterrorism, page 51. Brandt, P. T., Freeman, J. R., and Schrodt, P. A. (2011). Real time, time series forecasting of inter-and intra-state political conflict. Conflict Management and Peace Science, 28(1), 41–64. Brandt, P. T., Freeman, J. R., Lin, T.-m., and Schrodt, P. A. (2012). A Bayesian time series approach to the comparison of conflict dynamics. In APSA 2012 Annual Meeting Paper. Carlson, A., Betteridge, J., Kisiel, B., Settles, B., Hruschka, E. R., and Mitchell, T. M. (2010). Toward an architecture for never-ending language learning. 
In Proceedings of the Conference on Artificial Intelligence (AAAI), pages 1306– 1313. Carter, C. K. and Kohn, R. (1994). On Gibbs sampling for state space models. Biometrika, 81(3), 541–553. Chambers, N. and Jurafsky, D. (2009). Unsupervised learning of narrative schemas and their participants. In Proceedings of ACL-IJCNLP. Association for Computational Linguistics. Chambers, N. and Jurafsky, D. (2011). Template-based information extraction without the templates. In Proceedings of ACL. Chang, J., Boyd-Graber, J., and Blei, D. M. (2009). Connections between the lines: augmenting social networks with text. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 169–178. ACM. Cheung, J. C. K., Poon, H., and Vanderwende, L. (2013). Probabilistic frame induction. In Proceedings of NAACL. arXiv preprint arXiv:1302.4813. de Marneffe, M.-C. and Manning, C. D. (2008). Stanford typed dependencies manual. Technical report, Stanford University. Diesner, J. and Carley, K. M. (2005). Revealing social structure from texts: meta-matrix text analysis as a novel method for network text analysis. In Causal mapping for information systems and technology research, pages 81– 108. Harrisburg, PA: Idea Group Publishing. Eisenstein, J., O’Connor, B., Smith, N. A., and Xing, E. P. (2010). A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1277—1287. Eisenstein, J., Ahmed, A., and Xing, E. (2011). Sparse additive generative models of text. In Proceedings of ICML, pages 1041–1048. Friedman, C., Kra, P., Yu, H., Krauthammer, M., and Rzhetsky, A. (2001). GENIES: a natural-language processing system for the extraction of molecular pathways from journal articles. Bioinformatics, 17(suppl 1), S74–S82. Friedman, J., Hastie, T., and Tibshirani, R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1). Gerner, D. J., Schrodt, P. A., Yilmaz, O., and Abu-Jabr, R. (2002). The Creation of CAMEO (Conflict and Mediation Event Observations): An Event Data Framework for a Post Cold War World. Annual Meeting of the American Political Science Association. Gerrish, S. M. (2013). Applications of Latent Variable Models in Modeling Influence and Decision Making. Ph.D. thesis, Princeton University. Gerrish, S. M. and Blei, D. M. (2011). Predicting legislative roll calls from text. In Proceedings of ICML. Ghosn, F., Palmer, G., and Bremer, S. A. (2004). The MID3 data set, 1993–2001: Procedures, coding rules, and description. Conflict Management and Peace Science, 21(2), 133–154. Gildea, D. (2002). Probabilistic models of verb-argument structure. In Proceedings of COLING. Goldstein, J. S. (1992). A conflict-cooperation scale for WEIS events data. Journal of Conflict Resolution, 36, 369–385. Goldstein, J. S., Pevehouse, J. C., Gerner, D. J., and Telhami, S. (2001). Reciprocity, triangularity, and cooperation in the middle east, 1979-97. Journal of Conflict Resolution, 45(5), 594–620. Grenager, T. and Manning, C. D. (2006). Unsupervised discovery of a statistical verb lexicon. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, page 18. Griffiths, T. L. and Steyvers, M. (2004). Finding scientific topics. PNAS, 101(suppl. 1), 5228–5235. Harrison, J. and West, M. (1997). Bayesian forecasting and dynamic models. Springer Verlag, New York. Hobbs, J. 
R., Appelt, D., Bear, J., Israel, D., Kameyama, M., Stickel, M., and Tyson, M. (1997). FASTUS: A cascaded finite-state transducer for extracting information from natural-language text. Finite-State Language Processing, page 383. Hoff, P. D. (2003). Nonparametric modeling of hierarchically exchangeable data. University of Washington Statistics Department, Technical Report, 421. Holmes, C. C. and Held, L. (2006). Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis, 1(1), 145–168. Jones, D., Bremer, S., and Singer, J. (1996). Militarized interstate disputes, 1816–1992: Rationale, coding rules, and empirical patterns. Conflict Management and Peace Science, 15(2), 163–213. King, G. and Lowe, W. (2003). An automated information extraction tool for international conflict data with performance as good as human coders: A rare events evaluation design. International Organization, 57(3), 617–642. Lang, J. and Lapata, M. (2010). Unsupervised induction of semantic roles. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 939–947. Association for Computational Linguistics. Lehnert, W. G. (1994). Cognition, computers, and car bombs: How Yale prepared me for the 1990s. In Beliefs, Reasoning, and Decision-Making. Psycho-Logic in Honor of Bob Abelson, pages 143–173, Hillsdale, NJ, Hove, UK. Erlbaum. http://ciir.cs.umass.edu/pubfiles/ cognition3.pdf. Li, H., Li, X., Ji, H., and Marton, Y. (2010). Domainindependent novel event discovery and semi-automatic 1103 event annotation. In Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation, Sendai, Japan, November. Martin, A. D. and Quinn, K. M. (2002). Dynamic ideal point estimation via Markov chain Monte Carlo for the U.S. Supreme Court, 1953–1999. Political Analysis, 10(2), 134–153. McClelland, C. (1970). Some effects on theory from the international event analysis movement. Mimeo, University of Southern California. Mimno, D., Wallach, H., and McCallum, A. (2008). Gibbs sampling for logistic normal topic models with graphbased priors. In NIPS Workshop on Analyzing Graphs. Modi, A., Titov, I., and Klementiev, A. (2012). Unsupervised induction of frame-semantic representations. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 1–7. Association for Computational Linguistics. Murphy, K. P. (2012). Machine Learning: a Probabilistic Perspective. MIT Press. Neal, R. M. (2003). Slice sampling. Annals of Statistics, pages 705–741. Newman, D., Chemudugunta, C., and Smyth, P. (2006). Statistical entity-topic models. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 680–686. ACM. ´O S´eaghdha, D. (2010). Latent variable models of selectional preference. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 435–444. Association for Computational Linguistics. O’Brien, S. P. (2010). Crisis early warning and decision support: Contemporary approaches and thoughts on future research. International Studies Review, 12(1), 87–104. Parker, R., Graff, D., Kong, J., Chen, K., and Maeda, K. (2009). English Gigaword Fourth Edition. Linguistic Data Consortium. LDC2009T13. Piskorski, J. and Atkinson, M. (2011). Frontex real-time news event extraction framework. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 749–752. ACM. 
Piskorski, J., Tanev, H., Atkinson, M., van der Goot, E., and Zavarella, V. (2011). Online news event extraction for global crisis surveillance. Transactions on computational collective intelligence V, pages 182–212. Polson, N. G., Scott, J. G., and Windle, J. (2012). Bayesian inference for logistic models using Polya-Gamma latent variables. arXiv preprint arXiv:1205.0310. Poon, H. and Domingos, P. (2009). Unsupervised semantic parsing. In Proceedings of EMNLP, pages 1–10. Association for Computational Linguistics. Quinn, K. M., Monroe, B. L., Colaresi, M., Crespin, M. H., and Radev, D. R. (2010). How to analyze political attention with minimal assumptions and costs. American Journal of Political Science, 54(1), 209228. Rajaraman, A. and Ullman, J. D. (2011). Mining of massive datasets. Cambridge University Press; http:// infolab.stanford.edu/˜ullman/mmds.html. Ramshaw, L., Boschee, E., Freedman, M., MacBride, J., Weischedel, R., , and Zamanian, A. (2011). SERIF language processing effective trainable language understanding. Handbook of Natural Language Processing and Machine Translation, pages 636–644. Regneri, M., Koller, A., and Pinkal, M. (2010). Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 979–988. Rooth, M., Riezler, S., Prescher, D., Carroll, G., and Beil, F. (1999). Inducing a semantically annotated lexicon via EM-based clustering. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, page 104111. Rummel, R. (1968). The Dimensionality of Nations project. Rzhetsky, A., Iossifov, I., Koike, T., Krauthammer, M., Kra, P., Morris, M., Yu, H., Dubou´e, P. A., Weng, W., Wilbur, W. J., Hatzivassiloglou, V., and Friedman, C. (2004). GeneWays: a system for extracting, analyzing, visualizing, and integrating molecular pathway data. Journal of Biomedical Informatics, 37(1), 43–53. Sandhaus, E. (2008). The New York Times Annotated Corpus. Linguistic Data Consortium. LDC2008T19. Sanfilippo, A., Franklin, L., Tratz, S., Danielson, G., Mileson, N., Riensche, R., and McGrath, L. (2008). Automating frame analysis. Social computing, behavioral modeling, and prediction, pages 239–248. Schrodt, P. (2012). Precedents, progress, and prospects in political event data. International Interactions, 38(4), 546– 569. Schrodt, P. and Leetaru, K. (2013). GDELT: Global data on events, location and tone, 1979-2012. In International Studies Association Conference. Schrodt, P. A. (2001). Automated coding of international event data using sparse parsing techniques. International Studies Association Conference. Schrodt, P. A. (2006). Twenty Years of the Kansas Event Data System Project. Political Methodologist. Schrodt, P. A. and Gerner, D. J. (1994). Validity assessment of a machine-coded event data set for the Middle East, 1982-1992. American Journal of Political Science. Schrodt, P. A. and Gerner, D. J. (2004). An event data analysis of third-party mediation in the middle east and balkans. Journal of Conflict Resolution, 48(3), 310–330. Shellman, S. M. (2004). Time series intervals and statistical inference: The effects of temporal aggregation on event data analysis. Political Analysis, 12(1), 97–104. Titov, I. and Klementiev, A. (2011). A Bayesian model for unsupervised semantic parsing. In Proceedings of ACL. Titov, I. and Klementiev, A. (2012). A Bayesian approach to unsupervised semantic role induction. Proceedings of EACL. 
Wallach, H., Mimno, D., and McCallum, A. (2009). Rethinking lda: Why priors matter. Advances in Neural Information Processing Systems, 22, 1973–1981. Yao, L., Haghighi, A., Riedel, S., and McCallum, A. (2011). Structured relation discovery using generative models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1456–1466. Association for Computational Linguistics. Zeger, S. L. and Karim, M. R. (1991). Generalized linear models with random effects; a Gibbs sampling approach. Journal of the American Statistical Association, 86(413), 79–86. 1104
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1105–1115, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Graph Propagation for Paraphrasing Out-of-Vocabulary Words in Statistical Machine Translation∗ Majid Razmara1 Maryam Siahbani1 Gholamreza Haffari2 Anoop Sarkar1 1 Simon Fraser University, Burnaby, BC, Canada {razmara,msiahban,anoop}@sfu.ca 2 Monash University, Clayton, VIC, Australia [email protected] Abstract Out-of-vocabulary (oov) words or phrases still remain a challenge in statistical machine translation especially when a limited amount of parallel text is available for training or when there is a domain shift from training data to test data. In this paper, we propose a novel approach to finding translations for oov words. We induce a lexicon by constructing a graph on source language monolingual text and employ a graph propagation technique in order to find translations for all the source language phrases. Our method differs from previous approaches by adopting a graph propagation approach that takes into account not only one-step (from oov directly to a source language phrase that has a translation) but multi-step paraphrases from oov source language words to other source language phrases and eventually to target language translations. Experimental results show that our graph propagation method significantly improves performance over two strong baselines under intrinsic and extrinsic evaluation metrics. 1 Introduction Out-of-vocabulary (oov) words or phrases still remain a challenge in statistical machine translation. SMT systems usually copy unknown words verbatim to the target language output. Although this is helpful in translating a small fraction of oovs such as named entities for languages with same writing systems, it harms the translation in other types of oovs and distant language pairs. In general, copied-over oovs are a hindrance to fluent, high quality translation, and we can see evidence of this in automatic measures such as BLEU (Papineni et al., 2002) and also in human evaluation scores such as HTER. The problem becomes more severe when only a limited amount of parallel text is available for training or when the training and test data are from different domains. Even noisy translation of oovs can aid the language model to better ∗This research was partially supported by an NSERC, Canada (RGPIN: 264905) grant. The third author was supported by an early career research award from Monash University to visit Simon Fraser University. re-order the words in the target language (Zhang et al., 2012). Increasing the size of the parallel data can reduce the number of oovs. However, there will always be some words or phrases that are new to the system and finding ways to translate such words or phrases will be beneficial to the system. Researchers have applied a number of approaches to tackle this problem. Some approaches use pivot languages (Callison-Burch et al., 2006) while others use lexicon-induction-based approaches from source language monolingual corpora (Koehn and Knight, 2002; Garera et al., 2009; Marton et al., 2009). Pivot language techniques tackle this problem by taking advantage of available parallel data between the source language and a third language. Using a pivot language, oovs are translated into a third language and back into the source language and thereby paraphrases to those oov words are extracted (Callison-Burch et al., 2006). 
For each oov, the system can be augmented by aggregating the translations of all its paraphrases and assign them to the oov. However, these methods require parallel corpora between the source language and one or multiple pivot languages. Another line of work exploits spelling and morphological variants of oov words. Habash (2008) presents techniques for online handling of oov words for Arabic to English such as spelling expansion and morphological expansion. Huang et al. (2011) proposes a method to combine sublexical/constituent translations of an oov word or phrase to generate its translations. Several researchers have applied lexiconinduction methods to create a bilingual lexicon for those oovs. Marton et al. (2009) use a monolingual text on the source side to find paraphrases to oov words for which the translations are available. The translations for these paraphrases are 1105 then used as the translations of the oov word. These methods are based on the distributional hypothesis which states that words appearing in the same contexts tend to have similar meaning (Harris, 1954). Marton et al. (2009) showed that this method improves over the baseline system where oovs are untranslated. We propose a graph propagation-based extension to the approach of Marton et al. (2009) in which a graph is constructed from source language monolingual text1 and the source-side of the available parallel data. Nodes that have related meanings are connected together and nodes for which we have translations in the phrase-table are annotated with target-side translations and their feature values. A graph propagation algorithm is then used to propagate translations from labeled nodes to unlabeled nodes (phrases appearing only in the monolingual text and oovs). This provides a general purpose approach to handle several types of oovs, including morphological variants, spelling variants and synonyms2. Constructing such a huge graph and propagating messages through it pose severe computational challenges. Throughout the paper, we will see how these challenges are dealt with using scalable algorithms. 2 Collocational Lexicon Induction Rapp (1995) introduced the notion of a distributional profile in bilingual lexicon induction from monolingual data. A distributional profile (DP) of a word or phrase type is a co-occurrence vector created by combining all co-occurrence vectors of the tokens of that phrase type. Each distributional profile can be seen as a point in a |V |-dimensional space where V is the vocabulary where each word type represents a unique axis. Points (i.e. phrase types) that are close to one another in this highdimensional space can represent paraphrases. This approach has also been used in machine translation to find in-vocabulary paraphrases for oov words on the source side and find a way to translate them. 2.1 Baseline System Marton et al. (2009) was the first to successfully integrate a collocational approach to finding trans1Here on by monolingual data we always mean monolingual data on the source language 2Named entity oovs may be handled properly by copying or transliteration. lations for oov words into an end-to-end SMT system. We explain their method in detail as we will compare against this approach. The method relies on monolingual distributional profiles (DPs) which are numerical vectors representing the context around each word. The goal is to find words or phrases that appear in similar contexts as the oovs. 
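To make the notion of a distributional profile concrete before the formal definitions below, here is a minimal sketch that collects non-positional co-occurrence counts within a fixed window for a single-token target; the function name and the whitespace tokenization are illustrative assumptions (Section 2.2 discusses stronger association measures than raw counts).

```python
from collections import Counter

def distributional_profile(corpus_sentences, target, window=4):
    """Count how often each vocabulary word appears within `window`
    tokens of any occurrence of `target` (non-positional counts)."""
    profile = Counter()
    for sentence in corpus_sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok != target:
                continue
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    profile[tokens[j]] += 1
    return profile

# e.g. distributional_profile(["the oov word appears in this context ."], "oov")
```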
For each oov a distributional profile is created by collecting all words appearing within a fixed distance from all occurrences of the oov word in the monolingual text. These co-occurrence counts are converted to an association measure (Section 2.2) that encodes the relatedness of each pair of words or phrases. Then, the most similar phrases to each oov are found by measuring the similarity of their DPs to that of the oov word.
Marton et al. (2009) uses a heuristic to prune the search space for finding candidate paraphrases by keeping the surrounding context (e.g. L R) of each occurrence of the oov word. All phrases that appear in any of such contexts are collected as candidate paraphrases. For each of these paraphrases, a DP is constructed and compared to that of the oov word using a similarity measure (Section 2.3). The top-k paraphrases that have translations in the phrase-table are used to assign translations and scores to each oov word by marginalizing translations over paraphrases:
$$p(t|o) = \sum_{s} p(t|s)\, p(s|o)$$
where t is a phrase on the target side, o is the oov word or phrase, and s is a paraphrase of o. p(s|o) is estimated using a similarity measure over DPs and p(t|s) comes from the phrase-table. We reimplemented this collocational approach for finding translations for oovs and used it as a baseline system.
Alternative ways of modeling and comparing distributional profiles have been proposed (Rapp, 1999; Fung and Yee, 1998; Terra and Clarke, 2003; Garera et al., 2009; Marton et al., 2009). We review some of them here and compare their performance in Section 4.3.
2.2 Association Measures
Given a word u, its distributional profile DP(u) is constructed by counting surrounding words (in a fixed window size) in a monolingual corpus:
$$DP(u) = \{\langle A(u, w_i) \rangle \mid w_i \in V\}$$
The counts can be collected in a positional3 (Rapp, 1999) or non-positional way (counting all word occurrences within the sliding window). A(·, ·) is an association measure and can simply be defined as co-occurrence counts within sliding windows. Stronger association measures can also be used, such as:
Conditional probability: the probability of the occurrence of each word in the DP given the occurrence of u: CP(u, wi) = P(wi|u) (Sch¨utze and Pedersen, 1997).
Pointwise Mutual Information: this measure is a transformation of the independence assumption into a ratio. Positive values indicate that words co-occur more than we would expect under the independence assumption (Lin, 1998):
$$PMI(u, w_i) = \log_2 \frac{P(u, w_i)}{P(u)\, P(w_i)}$$
Likelihood ratio: Dunning (1993) uses the likelihood ratio for word similarity:
$$\lambda(u, w_i) = \frac{L(P(w_i|u); p) \cdot L(P(w_i|\neg u); p)}{L(P(w_i|u); p_1) \cdot L(P(w_i|\neg u); p_2)}$$
where L is the likelihood function under the assumption that word counts in text have binomial distributions. The numerator represents the likelihood of the hypothesis that u and wi are independent (P(wi|u) = P(wi|¬u) = p) and the denominator represents the likelihood of the hypothesis that u and wi are dependent (P(wi|u) ≠ P(wi|¬u), with P(wi|u) = p1, P(wi|¬u) = p2).4
Chi-square test: a statistical hypothesis test of the independence of two categorical random variables, e.g., whether the occurrences of u and wi (denoted by x and y respectively) are independent. The test statistic χ2(u, wi) is the deviation of the observed counts fx,y from their expected values Ex,y:
$$\chi^2(u, w_i) := \sum_{x \in \{w_i, \neg w_i\}} \sum_{y \in \{u, \neg u\}} \frac{(f_{x,y} - E_{x,y})^2}{E_{x,y}}$$
2.3 Similarity Measures
Various functions have been used to estimate the similarity between distributional profiles.
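Before the individual similarity functions are defined, the association step of Section 2.2 can be sketched as follows, building on the count-based profile above and using PMI as the measure; the maximum-likelihood probability estimates and argument names are simplifying assumptions.

```python
import math

def pmi_profile(cooc_u, count_u, unigram, n_pairs, n_tokens):
    """PMI(u, w) = log2( P(u, w) / (P(u) * P(w)) ) with maximum-likelihood
    estimates: P(u, w) from windowed co-occurrence pairs, P(u) and P(w)
    from unigram corpus frequencies.
      cooc_u[w]  -- co-occurrence count of w with the target u
      count_u    -- corpus frequency of u
      unigram[w] -- corpus frequency of w
      n_pairs    -- total number of (target, context-word) pairs counted
      n_tokens   -- corpus size in tokens"""
    p_u = count_u / n_tokens
    out = {}
    for w, c in cooc_u.items():
        p_uw = c / n_pairs
        p_w = unigram[w] / n_tokens
        if p_uw > 0 and p_u > 0 and p_w > 0:
            out[w] = math.log2(p_uw / (p_u * p_w))
    return out
```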
3e.g., position 1 is the word immediately after, position -1 is the word immediately before etc. 4Binomial distribution B(k; n, θ) gives the probability of observing k heads in n tosses of a coin where the coin parameter is θ. In our context, p, p1 and p2 are parameters of Binomial distributions estimated using maximum likelihood. Given two distributional profiles DP(u) and DP(v), some similarity functions can be defined as follows. Note that A(·, ·) stands for the various association measures defined in Sec. 2.2. Cosine coefficient is the cosine the angle between two vectors DP(u) and DP(v): cos(DP(u), DP(v)) = P wi∈V A(u, wi)A(v, wi) qP wi∈V A(u, wi)2qP wi∈V A(v, wi)2 L1-Norm computes the accumulated distance between entries of two distributional profiles (L1(·, ·)). It has been used as word similarity measure in language modeling (Dagan et al., 1999). L1(DP(u), DP(v)) = X wi∈V |A(u, wi)−A(v, wi)| Jensen-Shannon Divergence is a symmetric version of contextual average mutual information (KL) which is used by (Dagan et al., 1999) as word similarity measure. JSD(DP(u), DP(v)) =KL(DP(u), AV GDP (u, v))+ KL(DP(v), AV GDP (u, v)) AV GDP (u, v) = A(u, wi) + A(v, wi) 2 | wi ∈V  KL(DP(u), DP(v)) = X wi∈V A(u, wi)log A(u, wi) A(v, wi) 3 Graph-based Lexicon Induction We propose a novel approach to alleviate the oov problem. Given a (possibly small amount of) parallel data between the source and target languages, and a large monolingual data in the source language, we construct a graph over all phrase types in the monolingual text and the source side of the parallel corpus and connect phrases that have similar meanings (i.e. appear in similar context) to one another. To do so, the distributional profiles of all source phrase types are created. Each phrase type represents a vertex in the graph and is connected to other vertices with a weight defined by a similarity measure between the two profiles (Section 2.3). There are three types of vertices in the graph: i) labeled nodes which appear in the parallel corpus and for which we have the target-side 1107 translations5; ii) oov nodes from the dev/test set for which we seek labels (translations); and iii) unlabeled nodes (words or phrases) from the monolingual data which appear usually between oov nodes and labeled nodes. When a relatively small parallel data is used, unlabeled nodes outnumber labeled ones and many of them lie on the paths between an oov node to labeled ones. Marton et al. (2009)’s approach ignores these bridging nodes and connects each oov node to the k-nearest labeled nodes. One may argue that these unlabeled nodes do not play a major role in the graph and the labels will eventually get to the oov nodes from the labeled nodes by directly connecting them. However based on the definition of the similarity measures using context, it is quite possible that an oov node and a labeled node which are connected to the same unlabeled node do not share any context words and hence are not directly connected. For instance, consider three nodes, u (unlabeled), o (oov) and l (labeled) where u has the same left context words with o but share the right context with l. o and l are not connected since they do not share any context word. Once a graph is constructed based on similarities of phrases, graph propagation is used to propagate the labels from labeled nodes to unlabeled and oov nodes. 
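The bridging argument above can be illustrated with a toy example: an oov node o and a labeled node l share no context words directly, but each shares context with an unlabeled node u and is therefore reachable through it. The profiles below are fabricated purely for illustration.

```python
def shared_context(dp_a, dp_b):
    """Two phrases get an edge if their profiles share at least one context word."""
    return len(set(dp_a) & set(dp_b)) > 0

profiles = {
    "o": {"left1": 3, "left2": 1},                   # oov: only left-context cues
    "u": {"left1": 2, "left2": 2, "right1": 4},      # unlabeled: shares both sides
    "l": {"right1": 5, "right2": 2},                 # labeled: only right-context cues
}

print(shared_context(profiles["o"], profiles["l"]))  # False: no direct edge
print(shared_context(profiles["o"], profiles["u"]))  # True
print(shared_context(profiles["u"], profiles["l"]))  # True: the path o -- u -- l exists
```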
The approach is based on the smoothness assumption (Chapelle et al., 2006), which states that if two nodes are similar according to the graph, then their output labels should also be similar. The baseline approach (Marton et al., 2009) can be formulated as a bipartite graph with two types of nodes: labeled nodes (L) and oov nodes (O). Each oov node is connected to a number of labeled nodes, and vice versa, and there is no edge between nodes of the same type. In such a graph, the similarity of each pair of nodes is computed using one of the similarity measures discussed above. The labels are translations and their probabilities (more specifically p(e|f)) from the phrase-table extracted from the parallel corpus. Translations get propagated to oov nodes using a label propagation technique. However, besides the difference in the oov label assignment, there is a major difference between our bipartite graph and the baseline (Marton et al., 2009): we do not use a heuristic to reduce the number of neighbor candidates, and we consider all possible candidates that share at least one context word. This makes a significant difference in practice, as shown in Section 4.3.1.
5 It is possible that a phrase appears in the parallel corpus, but not in the phrase-table. This happens when the word-alignment module is not able to align the phrase to a target-side word or words.
We also take advantage of unlabeled nodes to help connect oov nodes to labeled ones. The discussed bipartite graph can easily be expanded to a tripartite graph by adding unlabeled nodes. Figure 1 illustrates a tripartite graph in which unlabeled nodes are connected to both labeled and oov nodes. Again, there is no edge between nodes of the same type. We also created the full graph, where all nodes can be freely connected to nodes of any type, including the same type. However, constructing such a graph and doing graph propagation on it is computationally very expensive for large n-grams.
3.1 Label Propagation
Let G = (V, E, W) be a graph where V is the set of vertices, E is the set of edges, and W is the edge weight matrix. The vertex set V consists of labeled VL and unlabeled VU nodes, and the goal of the label propagation algorithm is to compute soft labels for unlabeled vertices from the labeled vertices. Intuitively, the edge weight W(u, v) encodes the degree of our belief about the similarity of the soft labeling for nodes u and v. A soft label ˆYv ∈ ∆m+1 is a probability vector in the (m + 1)-dimensional simplex, where m is the number of possible labels and the additional dimension accounts for the undefined ⊥ label.6 In this paper, we make use of the Modified Adsorption (MAD) algorithm (Talukdar and Crammer, 2009), which finds soft label vectors ˆYv to solve the following unconstrained optimization problem:
$$\min_{\hat{Y}}\;\; \mu_1 \sum_{v \in V_L} p_{1,v}\, \|Y_v - \hat{Y}_v\|_2^2 \quad (1)$$
$$\;+\; \mu_2 \sum_{v,u} p_{2,v} W_{v,u}\, \|\hat{Y}_v - \hat{Y}_u\|_2^2 \quad (2)$$
$$\;+\; \mu_3 \sum_{v} p_{3,v}\, \|\hat{Y}_v - R_v\|_2^2 \quad (3)$$
where µi and pi,v are hyper-parameters (∀v : Σi pi,v = 1),7 and Rv ∈ ∆m+1 encodes our prior belief about the labeling of a node v. The first
6 Capturing those cases where the given data is not enough to reliably compute a soft labeling using the initial m real labels.
7 The values of these hyper-parameters are set to their defaults in the Junto toolkit (Talukdar and Crammer, 2009).
Figure 1: A tripartite graph between oov, labeled and unlabeled nodes.
Translations propagate either directly from labeled nodes to oov nodes or indirectly via unlabeled nodes. term (1) enforces the labeling of the algorithm to match the seed labeling Yv with different extent for different labeled nodes. The second term (2) enforces the smoothness of the labeling according to the graph structure and edge weights. The last term (3) regularizes the soft labeling for a vertex v to match a priori label Rv, e.g. for high-degree unlabeled nodes (hubs in the graph) we may believe that the neighbors are not going to produce reliable label and hence the probability of undefined label ⊥should be higher. The optimization problem can be solved with an efficient iterative algorithm which is parallelized in a MapReduce framework (Talukdar et al., 2008; Rao and Yarowsky, 2009). We used the Junto label propagation toolkit (Talukdar and Crammer, 2009) for label propagation. 3.2 Efficient Graph Construction Graph-based approaches can easily become computationally very expensive as the number of nodes grow. In our case, we use phrases in the monolingual text as graph vertices. These phrases are n-grams up to a certain value, which can result in millions of nodes. For each node a distributional profile (DP) needs to be created. The number of possible edges can easily explode in size as there can be as many as O(n2) edges where n is the number of nodes. A common practice to control the number of edges is to connect each node to at most k other nodes (k-nearest neighbor). However, finding the top-k nearest nodes to each node requires considering its similarity to all the other nodes which requires O(n2) computations and since n is usually very large, doing such is practically intractable. Therefore, researchers usually resort to an approximate k-NN algorithms such as locality-sensitive hashing (?; Goyal et al., 2012). Fortunately, since we use context words as cues for relating their meaning and since the similarity measures are defined based on these cues, the number of neighbors we need to consider for each node is reduced by several orders of magnitude. We incorporate an inverted-index-style data structure which indicates what nodes are neighbors based on each context word. Therefore, the set of neighbors of a node consists of union of all the neighbors bridged by each context word in the DP of the node. However, the number of neighbors to be considered for each node even after this drastic reduction is still large (in order of a few thousands). In order to deal with the computational challenges of such a large graph, we take advantage of the Hadoop’s MapReduce functionality to do both graph construction and label propagation steps. 4 Experiments & Results 4.1 Experimental Setup We experimented with two different domains for the bilingual data: Europarl corpus (v7) (Koehn, 1109 Dataset Domain Sents Tokens Fr En Bitext Europarl 10K 298K 268K EMEA 1M 16M 14M Monotext Europarl 2M 60M – Dev-set WMT05 2K 67K 58K Test-set WMT05 2K 66K 58K Table 1: Statistics of training sets in different domains. 2005), and European Medicines Agency documents (EMEA) (Tiedemann, 2009) from French to English. For the monolingual data, we used French side of the Europarl corpus and we used ACL/WMT 20058 data for dev/test sets. Table 1 summarizes statistics of the datasets used. From the dev and test sets, we extract all source words that do not appear in the phrase-table constructed from the parallel data. From the oovs, we exclude numbers as well as named entities. 
We apply a simple heuristic to detect named entities: basically words that are capitalized in the original dev/test set that do not appear at the beginning of a sentence are named entities. Table 2 shows the number of oov types and tokens for Europarl and EMEA systems in both dev and test sets. Dataset Dev Test types tokens types tokens Europarl 1893 2229 1830 2163 EMEA 2325 4317 2294 4190 Table 2: number of oovs in dev and test sets for Europarl and EMEA systems. For the end-to-end MT pipeline, we used Moses (Koehn et al., 2007) with these standard features: relative-frequency and lexical translation model (TM) probabilities in both directions; distortion model; language model (LM) and word count. Word alignment is done using GIZA++ (Och and Ney, 2003). We used distortion limit of 6 and max-phrase-length of 10 in all the experiments. For the language model, we used the KenLM toolkit (Heafield, 2011) to create a 5-gram language model on the target side of the Europarl corpus (v7) with approximately 54M tokens with Kneser-Ney smoothing. 4.1.1 Phrase-table Integration Once the translations and their probabilities for each oov are extracted, they are added to the 8http://www.statmt.org/wpt05/mt-shared-task/ phrase-table that is induced from the parallel text. The probability for new entries are added as a new feature in the log-linear framework to be tuned along with other features. The value of this newly introduced feature for original entries in the phrase-table is set to 1. Similarly, the value of original four probability features in the phrasetable for the new entries are set to 1. The entire training pipeline is as follows: (i) a phrase table is constructed using parallel data as usual, (ii) oovs for dev and test sets are extracted, (iii) oovs are translated using graph propagation, (iv) oovs and translations are added to the phrase table, introducing a new feature type, (v) the new phrase table is tuned (with a LM) using MERT (Och, 2003) on the dev set. 4.2 Evaluation If we have a list of possible translations for oovs with their probabilities, we become able to evaluate different methods we discussed. We wordaligned the dev/test sets by concatenating them to a large parallel corpus and running GIZA++ on the whole set. The resulting word alignments are used to extract the translations for each oov. The correctness of this gold standard is limited to the size of the parallel data used as well as the quality of the word alignment software toolkit, and is not 100% precise. However, it gives a good estimate of how each oov should be translated without the need for human judgments. For evaluating our baseline as well as graphbased approaches, we use both intrinsic and extrinsic evaluations. Two intrinsic evaluation metrics that we use to evaluate the possible translations for oovs are Mean Reciprocal Rank (MRR) (Voorhees, 1999) and Recall. Intrinsic evaluation metrics are faster to apply and are used to optimize different hyper-parameters of the approach (e.g. window size, phrase length, etc.). Once we come up with the optimized values for the hyper-parameters, we extrinsically evaluate different approaches by adding the new translations to the phrase-table and run it through the MT pipeline. 4.2.1 MRR MRR is an Information Retrieval metric used to evaluate any process that produces a ranked list of possible candidates. The reciprocal rank of a list is the inverse of the rank of the correct answer in the list. 
This score is averaged over a set, the oov set in our case, to get the mean reciprocal rank score:

\mathrm{MRR} = \frac{1}{|O|} \sum_{i=1}^{|O|} \frac{1}{\mathrm{rank}_i}, \quad O = \{\text{oov}\}

In a few cases there are multiple translations for an oov word (i.e., it appears more than once in the parallel corpus and is assigned to multiple different phrases); in these cases we take the average of the reciprocal ranks for each of them.

4.2.2 Recall
MRR takes the probabilities of oov translations into account in sorting the list of candidate translations. However, in an MT pipeline, the language model is supposed to rerank the hypotheses and move more appropriate translations (in terms of fluency) to the top of the list. Hence, we also evaluate our candidate translations regardless of their ranks. Since Moses uses a certain number of translations per source phrase (called the translation table limit, or ttl, which we set to 20 in our experiments), we use the recall measure to evaluate the top ttl translations in the list. Recall is another Information Retrieval measure: the fraction of correct answers that are retrieved. For example, it assigns a score of 1 if the correct translation of the oov word is in the top-k list and 0 otherwise. The scores are averaged over all oovs to compute recall.

\mathrm{Recall} = \frac{|\{\text{gold standard}\} \cap \{\text{candidate list}\}|}{|\{\text{gold standard}\}|}

4.3 Intrinsic Results
In Sections 2.2 and 2.3, different types of association measures and similarity measures were explained for building and comparing distributional profiles. Table 3 shows the results on Europarl when using different similarity combinations. The measures are evaluated by fixing the window size to 4 and the maximum candidate paraphrase length to 2 (i.e., bigram). The first column shows the association measures used to build DPs. As the results show, the combination of PMI as the association measure and cosine as the DP similarity measure outperforms the other possible combinations. We use these two measures throughout the rest of the experiments.

Assoc   cosine (%)       L1norm (%)       JSD (%)
        MRR    RCL       MRR    RCL       MRR    RCL
CP      1.66   4.16      2.18   5.55      2.33   6.32
LLR     1.79   4.26      0.13   0.37      0.5    1.00
PMI     3.91   7.75      0.50   1.17      0.59   1.21
Chi     1.66   4.16      0.26   0.55      0.03   0.05
Table 3: Results of intrinsic evaluations (MRR and Recall) on Europarl, window size 4 and paraphrase length 2.

Figure 2 illustrates the effects of different window sizes and paraphrase lengths on MRR. As the figure shows, the best MRR is reached when using a window size of 4 and trigram nodes. Going from trigram to 4-gram nodes results in a drop in MRR. One reason would be that distributional profiles for 4-grams are very sparse, and that negatively affects the stability of the similarity measures.

Figure 2: Effects of different window sizes and paraphrase length on the MRR of the dev set (MRR (%) vs. window size, for unigram, bigram, trigram, and 4-gram nodes).

Figure 3 illustrates the effect of increasing the size of the monolingual text on both MRR and recall. 1× refers to the case of using 125k sentences for the monolingual text and 16× indicates using the whole Europarl text on the source side (≈2M sentences). As shown, there is a linear correlation between the logarithm of the data size and the MRR and recall ratios. Interestingly, MRR grows faster than recall as the monolingual text size increases, which means that the scoring function gets better when more data is available.
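Both intrinsic metrics are straightforward to compute once each oov has a ranked candidate list and a set of alignment-derived gold translations. The sketch below is a direct rendering of the two formulas above, with hypothetical dictionary inputs; it is not the evaluation script used in the paper.

```python
def mrr(candidates, gold):
    """Mean reciprocal rank over the oov set.

    candidates: dict oov -> list of translations, best first
    gold:       dict oov -> set of gold-standard translations
    """
    total = 0.0
    for oov, ranked in candidates.items():
        # Reciprocal rank of each gold translation; average when there are several.
        rrs = []
        for g in gold.get(oov, set()):
            rrs.append(1.0 / (ranked.index(g) + 1) if g in ranked else 0.0)
        total += sum(rrs) / len(rrs) if rrs else 0.0
    return total / len(candidates)


def recall_at_k(candidates, gold, k=20):
    """Fraction of gold translations found in the top-k (ttl) candidates, averaged over oovs."""
    scores = []
    for oov, ranked in candidates.items():
        g = gold.get(oov, set())
        if g:
            scores.append(len(g & set(ranked[:k])) / len(g))
    return sum(scores) / len(scores) if scores else 0.0
```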
The figure also indicates that a much bigger monolingual corpus could be used to further improve the quality of the translations, though at the expense of more computational resources.

Figure 3: Effect of increasing the monolingual text size on MRR and Recall (MRR ratio and recall ratio vs. mono-text size ratio, from 1x to 16x).

Graph       Neighbor  MRR %  RCL %
Bipartite   20        5.2    12.5
Tripartite  15+5      5.9    12.6
Full        20        5.1    10.9
Baseline    20        3.7    7.2
Table 4: Intrinsic results of different types of graphs when using unigram nodes on Europarl.

Type        Node     MRR %  RCL %
Bipartite   unigram  5.2    12.5
            bigram   6.8    15.7
Tripartite  unigram  5.9    12.6
            bigram   6.9    15.9
Baseline    bigram   3.9    7.7
Table 5: Results on using unigram or bigram nodes.

4.3.1 Graph-based Results
Table 4 shows the intrinsic results on the Europarl corpus when using unigram nodes in each of the graphs. The results are evaluated on the dev-set based on the gold alignment created using GIZA++. Each node is connected to at most 20 other nodes (the same as the max-paraphrase limit in the baseline). For the tripartite graph, each node is connected to 15 labeled nodes and 5 unlabeled ones. The tripartite graph gets a slight improvement over the bipartite one; the full graph, however, fails to show the same increase. One reason is that allowing paths longer than 2 between oov and labeled nodes causes more noise to propagate into the graph. In other words, a paraphrase of a paraphrase of a paraphrase is not necessarily a useful paraphrase for an oov, as the translation may no longer be a valid one. Table 5 also shows the effect of using bigrams instead of unigrams as graph nodes. There is an improvement in going from unigrams to bigrams in both bipartite and tripartite graphs. We did not use trigrams or larger n-grams in our experiments.

4.4 Extrinsic Results
The generated candidate translations for the oovs can be added to the phrase-table created using the parallel corpus to increase its coverage. This aggregated phrase-table is then tuned along with the language model on the dev set, and run on the test set. BLEU (Papineni et al., 2002) is still the de facto evaluation metric for machine translation, and we use it to measure the quality of our proposed approaches for MT. In these experiments, we do not use alignment information on the dev or test sets, unlike the previous section. Table 6 reports the Bleu scores for different domains when the oov translations from the graph propagation are added to the phrase-table, and compares them with the baseline system (i.e., Moses). Results for our approach are based on unigram tripartite graphs and show that we improve over the baseline in both the same-domain (Europarl) and domain adaptation (EMEA) settings. Table 7 shows some translations found by our system for oov words.

oov           gold standard                               candidate list
spécialement  undone, particularly, especially,           particularly, specific, only, particular,
              special, particular                         should, and, especially
assentiment   approval, support, agreement                approval, accession, will, approve, endorses
Table 7: Two examples of oov translations found by our method.

5 Related work
There has been a long line of research on learning translation pairs from non-parallel corpora (Rapp, 1995; Koehn and Knight, 2002; Haghighi et al., 2008; Garera et al., 2009; Marton et al., 2009; Laws et al., 2010).
Most have focused on extracting a translation lexicon by mining monolingual resources of data to find clues, using probabilistic methods to map words, or by exploiting the cross-language evidence of closely related languages. Most of them evaluated only highfrequency words of specific types (nouns or content words) (Rapp, 1995; Koehn and Knight, 2002; Haghighi et al., 2008; Garera et al., 2009; Laws et al., 2010) In contrast, we do not consider any constraint on our test data and our data includes many low frequency words. It has been shown that translation of high-frequency words is easier than low frequency words (Tamura et al., 2012). Some methods have used a third language(s) as pivot or bridge to find translation pairs (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002; Callison-Burch et al., 2006). 1112 Corpus System MRR Recall Dev Bleu Test Bleu Europarl Baseline – – 28.53 28.97 Our approach 5.9 12.6 28.76 29.40* EMEA Baseline – – 20.05 20.34 Our approach 3.6 7.4 20.54 20.80* * Statistically significant with p < 0.02 using the bootstrap resampling significance test (in Moses). Table 6: Bleu scores for different domains with or without using oov translations. Context similarity has been used effectively in bilingual lexicon induction (Rapp, 1995; Koehn and Knight, 2002; Haghighi et al., 2008; Garera et al., 2009; Marton et al., 2009; Laws et al., 2010). It has been modeled in different ways: in terms of adjacent words (Rapp, 1999; Fung and Yee, 1998), or dependency relations (Garera et al., 2009). Laws et al. (2010) used linguistic analysis in the form of graph-based models instead of a vector space. But all of these researches used an available seed lexicon as the basic source of similarity between source and target languages unlike our method which just needs a monolingual corpus of source language which is freely available for many languages and a small bilingual corpora. Some methods tried to alleviate the lack of seed lexicon by using orthographic similarity to extract a seed lexicon (Koehn and Knight, 2002; Fiser and Ljubesic, 2011). But it is not a practical solution in case of unrelated languages. Haghighi et al. (2008) and Daum´e and Jagarlamudi (2011) proposed generative models based on canonical correlation analysis to extract translation lexicons for non-parallel corpora by learning a matching between source and target lexicons. Using monolingual features to represent words, feature vectors are projected from source and target words into a canonical space to find the appropriate matching between them. Their method relies on context features which need a seed lexicon and orthographic features which only works for phylogenetically related languages. Graph-based semi-supervised methods have been shown to be useful for domain adaptation in MT as well. Alexandrescu and Kirchhoff (2009) applied a graph-based method to determine similarities between sentences and use these similarities to promote similar translations for similar sentences. They used a graph-based semi-supervised model to re-rank the n-best translation hypothesis. Liu et al. (2012) extended Alexandrescu’s model to use translation consensus among similar sentences in bilingual training data by developing a new structured label propagation method. They derived some features to use during decoding process that has been shown useful in improving translation quality. 
Our graph propagation method connects monolingual source phrases with oovs to obtain translation and so is a very different use of graph propagation from these previous works. Recently label propagation has been used for lexicon induction (Tamura et al., 2012). They used a graph based on context similarity as well as cooccurrence graph in propagation process. Similar to our approach they used unlabeled nodes in label propagation process. However, they use a seed lexicon to define labels and comparable corpora to construct graphs unlike our approach. 6 Conclusion We presented a novel approach for inducing oov translations from a monolingual corpus on the source side and a parallel data using graph propagation. Our results showed improvement over the baselines both in intrinsic evaluations and on BLEU. Future work includes studying the effect of size of parallel corpus on the induced oov translations. Increasing the size of parallel corpus on one hand reduces the number of oovs. But, on the other hand, there will be more labeled paraphrases that increases the chance of finding the correct translation for oovs in the test set. Currently, we find paraphrases for oov words. However, oovs can be considered as n-grams (phrases) instead of unigrams. In this scenario, we also can look for paraphrases and translations for phrases containing oovs and add them to the phrase-table as new translations along with the translations for unigram oovs. We also plan to explore different graph propagation objective functions. Regularizing these objective functions appropriately might let us scale to much larger data sets with an order of magnitude more nodes in the graph. 1113 References Andrei Alexandrescu and Katrin Kirchhoff. 2009. Graph-based learning for statistical machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 119–127, Stroudsburg, PA, USA. Association for Computational Linguistics. C. Callison-Burch, P. Koehn, and M. Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 17–24. Association for Computational Linguistics. O. Chapelle, B. Sch¨olkopf, and A. Zien, editors. 2006. Semi-Supervised Learning. MIT Press, Cambridge, MA. Ido Dagan, Lillian Lee, and Fernando C. N. Pereira. 1999. Similarity-based models of word cooccurrence probabilities. Mach. Learn., 34(1-3):43–69, February. Hal Daum´e, III and Jagadeesh Jagarlamudi. 2011. Domain adaptation for machine translation by mining unseen words. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT ’11, pages 407–412, Stroudsburg, PA, USA. Association for Computational Linguistics. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Comput. Linguist., 19(1):61–74, March. Darja Fiser and Nikola Ljubesic. 2011. Bilingual lexicon extraction from comparable corpora for closely related languages. In RANLP, pages 125–131. Pascale Fung and Lo Yuen Yee. 1998. An ir approach for translating new words from nonparallel, comparable texts. 
In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1, ACL ’98, pages 414– 420. Association for Computational Linguistics. Nikesh Garera, Chris Callison-Burch, and David Yarowsky. 2009. Improving translation lexicon induction from monolingual corpora via dependency contexts and part-of-speech equivalences. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL ’09, pages 129–137, Stroudsburg, PA, USA. Association for Computational Linguistics. Amit Goyal, Hal Daume III, and Raul Guerra. 2012. Fast Large-Scale Approximate Graph Construction for NLP. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’12. Nizar Habash. 2008. Four techniques for online handling of out-of-vocabulary words in arabic-english statistical machine translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 57–60. Association for Computational Linguistics. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In ACL, pages 771–779. Zellig Harris. 1954. Distributional structure. Word, 10(23):146–162. Kenneth Heafield. 2011. Kenlm: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197. Chung-Chi Huang, Ho-Ching Yen, Ping-Che Yang, Shih-Ting Huang, and Jason S Chang. 2011. Using sublexical translations to handle the oov problem in machine translation. ACM Transactions on Asian Language Information Processing (TALIP), 10(3):16. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition - Volume 9, ULA ’02, pages 9–16, Stroudsburg, PA, USA. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 177–180, Stroudsburg, PA, USA. ACL. P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5. Florian Laws, Lukas Michelbacher, Beate Dorow, Christian Scheible, Ulrich Heid, and Hinrich Sch¨utze. 2010. A linguistically grounded graph model for bilingual lexicon extraction. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING ’10, pages 614–622, Stroudsburg, PA, USA. Association for Computational Linguistics. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 2, ACL ’98, pages 768–774, Stroudsburg, PA, USA. Association for Computational Linguistics. 1114 Shujie Liu, Chi-Ho Li, Mu Li, and Ming Zhou. 2012. Learning translation consensus with structured label propagation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 302–310, Stroudsburg, PA, USA. 
Association for Computational Linguistics. Gideon S. Mann and David Yarowsky. 2001. Multipath translation lexicon induction via bridge languages. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL ’01, pages 1–8, Stroudsburg, PA, USA. Yuval Marton, Chris Callison-Burch, and Philip Resnik. 2009. Improved statistical machine translation using monolingually-derived paraphrases. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 - Volume 1, EMNLP ’09, pages 381–390, Stroudsburg, PA, USA. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Comput. Linguist., 29(1):19–51, March. Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In Proceedings of the 41th Annual Meeting of the ACL, Sapporo, July. ACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Delip Rao and David Yarowsky. 2009. Ranking and semi-supervised classification on large scale graphs using map-reduce. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing, TextGraphs-4. Association for Computational Linguistics. Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, ACL ’95, pages 320–322. Association for Computational Linguistics. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated english and german corpora. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, ACL ’99, pages 519– 526. Association for Computational Linguistics. Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In proceedings of the 6th conference on Natural language learning - Volume 20, COLING-02, pages 1–7, Stroudsburg, PA, USA. Association for Computational Linguistics. Hinrich Sch¨utze and Jan O. Pedersen. 1997. A cooccurrence-based thesaurus and two applications to information retrieval. Inf. Process. Manage., 33(3):307–318, May. Partha Pratim Talukdar and Koby Crammer. 2009. New Regularized Algorithms for Transductive Learning. In European Conference on Machine Learning (ECML-PKDD). Partha Pratim Talukdar, Joseph Reisinger, Marius Pas¸ca, Deepak Ravichandran, Rahul Bhagat, and Fernando Pereira. 2008. Weakly-supervised acquisition of labeled class instances using graph random walks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08. Akihiro Tamura, Taro Watanabe, and Eiichiro Sumita. 2012. Bilingual lexicon extraction from comparable corpora using label propagation. In EMNLPCoNLL, pages 24–36. Egidio L. Terra and Charles L. A. Clarke. 2003. Frequency estimates for statistical word similarity measures. In HLT-NAACL. Jorg Tiedemann. 2009. News from opus - a collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. 
John Benjamins, Amsterdam/Philadelphia. Ellen M. Voorhees. 1999. TREC-8 Question Answering Track Report. In Proceedings of the 8th Text Retrieval Conference, pages 77–82. Jiajun Zhang, Feifei Zhai, and Chengqing Zong. 2012. Handling unknown words in statistical machine translation from a new perspective. In Natural Language Processing and Chinese Computing, pages 176–187. Springer. 1115
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 104–113, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Recognizing Rare Social Phenomena in Conversation: Empowerment Detection in Support Group Chatrooms Elijah Mayfield, David Adamson, and Carolyn Penstein Ros´e Language Technologies Institute Carnegie Mellon University 5000 Forbes Ave, Pittsburgh, PA 15213 {emayfiel, dadamson, cprose}@cs.cmu.edu Abstract Automated annotation of social behavior in conversation is necessary for large-scale analysis of real-world conversational data. Important behavioral categories, though, are often sparse and often appear only in specific subsections of a conversation. This makes supervised machine learning difficult, through a combination of noisy features and unbalanced class distributions. We propose within-instance content selection, using cue features to selectively suppress sections of text and biasing the remaining representation towards minority classes. We show the effectiveness of this technique in automated annotation of empowerment language in online support group chatrooms. Our technique is significantly more accurate than multiple baselines, especially when prioritizing high precision. 1 Introduction Quantitative social science research has experienced a recent expansion, out of controlled settings and into natural environments. With this influx of interest comes new methodology, and the inevitable question arises of how to move towards testable hypotheses, using these uncontrolled sources of data as scientific lenses into the real world. The study of conversational transcripts is a key domain in this new frontier. There are certain social and behavioral phenomena in conversation that cannot be easily identified through questionnaire data, self-reported surveys, or easily extracted user metadata. Examples of these social phenomena in conversation include overt displays of power (Prabhakaran et al., 2012) or indicators of rapport and relationship building (Wang et al., 2012). Manually annotating these social phenomena cannot scale to large data, so researchers turn to automated annotation of transcripts (Ros´e et al., 2008). While machine learning is highly effective for annotation tasks with relatively balanced labels, such as sentiment analysis (Pang and Lee, 2004), more complex social functions are often rarer. This leads to unbalanced class label distributions and a much more difficult machine learning task. Moreover, features indicative of rare social annotations tend to be drowned out in favor of features biased towards the majority class. The net effect is that classification algorithms tend to bias towards the majority class, giving low accuracy for rare class detection. Automated annotation of social phenomena also brings opportunities for real-world applications. For example, real-time annotation of conversation can power adaptive intervention in collaborative learning settings (Rummel et al., 2008; Adamson and Ros´e, 2012). However, with the considerable power of automation comes great responsibility. It is critical to avoid intervening in the case of erroneous annotations, as providing unnecessary or inappropriate support in such a setting has been shown to be harmful to group performance and social cohesion (Dillenbourg, 2002; Stahl, 2012). We propose adaptations to existing machine learning algorithms which improve recognition of rare annotations in conversational text data. 
Our primary contribution comes in the form of withininstance content selection. We develop a novel algorithm based on textual cues, suppressing information which is likely to be irrelevant to an instance’s class label. This allows features which predict minority classes to gain prominence, helping to sidestep the frequency of common features pointing to a majority class label. Additionally, we propose modifications to existing algorithms. First, we identify a new application of logistic model trees to text data. Next, 104 we define a modification of confidence-based ensemble voting which encourages minority class labeling. Using these techniques, we demonstrate a significant improvement in classifier performance when recognizing the language of empowerment in support group chatrooms, a critical application area for researchers studying conversational interactions in healthcare (Uden-Kraan et al., 2009). The remainder of this paper is structured as follows. We introduce the domain of empowerment in support contexts, along with previous studies on the challenges that these annotations (and similar others) bring to machine learning. We introduce our new technique for improving the ability to automate this annotation, along with other optimizations to the machine learning workflow which are tailored to this skewed class balance. We present experimental results showing that our method is effective, and provide a detailed analysis of the behavior of our model and the features it uses most. We conclude with a discussion of particularly useful applications of this work. 2 Background We ground this paper’s discussion of machine learning with a real problem, turning to the annotation of empowerment language in chat1. The concept of empowerment, while a prolific area of research, lacks a broad definition across professionals, but broadly relates to “the power to act efficaciously to bring about desired results” (Boehm and Staples, 2002) and “experiencing personal growth as a result of developing skills and abilities along with a more positive self-definition” (Staples, 1990). Participants in online support groups feel increased empowerment (Uden-Kraan et al., 2009; Barak et al., 2008). Quantitative studies have shown the effect of empowerment through statistical methods such as structural equation modeling (Vauth et al., 2007), as have qualitative methods such as deductive transcript analysis (Owen et al., 2008) and interview studies (Wahlin et al., 2006). The transition between these styles of research has been gradual. Pioneering work has demonstrated the ability to distinguish empowerment language in written texts, including prompted writing samples (Pennebaker and Seagal, 1999), nar1Definitions of empowerment are closely related to the notion of self-efficacy (Bandura, 1997). For simplicity, we use the former term exclusively in this paper. Table 1: Empowerment label distribution in our corpus. Annotation Label # % Self-Empowerment NA 1522 79.3 POS 202 10.5 NEG 196 10.2 Other-Empowerment NA 1560 81.3 POS 217 11.3 NEG 143 7.4 ratives in online forums (Hoybye et al., 2005), and some preliminary analysis of synchronous discussion (Ogura et al., 2008; Mayfield et al., 2012b). These transitional works have used limited analysis methodology; in the absence of sophisticated natural language processing, their conclusions often rely on coarse measures, such as word counts and proportions of annotations in a text. 
Users, of course, do not express empowerment in every thread in which they participate, which leads to a challenge for machine learning. Threads often focus on a single user’s experiences, in which most participants in a chat are merely commentators, if they participate at all, matching previous research on shifts in speaker salience over time (Hassan et al., 2008). This leads to many user threads which are annotated as not applicable (N/A). We move to our proposed approach with these skewed distributions in mind. 3 Data Our data consists of a set of chatroom conversation transcripts from the Cancer Support Community2. Each 90-minute conversation took place in the context of a weekly meeting in a real-time chat, with up to 6 participants in addition to a professional therapist facilitating the discussion. In total, 2,206 conversations were collected from 20072011. This data offers potentially rich insight into coping and social support; however, annotating such a dataset by hand would be prohibitively expensive, even when it is already transcribed. Twenty-one of these conversations have been annotated, as originally described and analyzed in (Mayfield et al., 2012b)3. This data was disentangled into threads based on common themes or topics, as in prior work (Elsner and Charniak, 2www.cancersupportcommunity.org 3All annotations were found to be adequately reliable between humans, with thread disentanglement f = 0.75 and empowerment annotation κ > 0.7. 105 Figure 1: An example mapping from a single thread’s chat lines (left) to the per-user, per-thread instances used for classification in this paper (right), with example annotations for self-empowerment indicated. 2010; Adams and Martel, 2010). A novel peruser, per-thread annotation was then employed for empowerment annotation, following a coding manual based on definitions like those in Section 2. Each user was assigned a label of positive or negative empowerment if they exhibited such emotions, or was left blank if they did not do so within the context of that thread. This annotation was performed both for their self-empowerment as well as their attitude towards others’ situations (other-empowerment). An example of this annotation for self-empowerment is presented in Figure 1 and the distribution of labels is given in Table 1. Most previous annotation tasks attempt to annotate on a per-utterance basis, such as dialogue act tagging (Popescu-Belis, 2008), or on arbitrary spans of text, such as in the MPQA subjectivity corpus (Wiebe et al., 2005). However, for our task, a per-user, per-thread annotation is more appropriate, because empowerment is often indicated best through narrative (Hoybye et al., 2005). Human annotators are instructed to take this context into account when annotating (Mayfield et al., 2012b). It would therefore be nonsensical to annotate individual lines as “embodying” empowerment. Similar arguments have been made for sentiment, especially as the field moves towards aspect-oriented sentiment (Breck et al., 2007). Assigning labels based on thread boundaries allows for context to be meaningfully taken into account, without crossing topic boundaries. However, this granularity comes with a price: the distribution of class values in these instances is highly skewed. In our data, the vast majority of users’ threads are marked as not applicable to empowerment. Perhaps more inconveniently, while taking context into account is important for reliable annotation, it leads to extraneous information in many cases. 
Many threads can have multiple lines of contributions that are topically related to an expression of empowerment (and thus belong in the same thread), but which do not indicate any empowerment themselves. This exacerbates the likelihood of instances being classified as N/A. We choose to take advantage of these attributes of threads. We know from research in discourse analysis that many sections of conversations are formulaic and rote, like introductions and greetings (Schegloff, 1968). We additionally know that polarity often shifts in dialogue through the use of discourse connectives such as conjunctions and transitional phrases. These issues have been addressed in work in the language technologies community, most notably through the Penn Discourse Treebank (Prasad et al., 2008); however, their applications to noisier synchronous conversation has beenrare in computational linguistics. With these linguistic insights in mind, we examine how we can make best use of them for machine learning performance. While techniques for predicting rare events (Weiss and Hirsh, 1998) and compensating for class imbalance (Frank and 106 Bouckaert, 2006), these approaches generally focus on statistical properties of large class sets without taking the nature of their datasets into account. In the next section, we propose a new algorithm which takes advantage specifically of the linguistic phenomena in the conversation-based data that we study for empowerment detection. As such, our algorithm is highly suited to this data and task, with the necessary tradeoff in uncertain generality to new domains with unrelated data. 4 Cue Discovery for Content Selection Our algorithm performs content selection by learning a set of cue features. Each of these features indicates some linguistic function within the discourse which should downplay the importance of features either before or after that discourse marker. Our algorithm allows us to evaluate the impact of rules against a baseline, and to iteratively judge each rule atop the changes made by previous rules. This algorithm fits into existing language technologies research which has attempted to partition documents into sections which are more or less relevant for classification. Many researchers have attempted to make use of cue phrases (Hirschberg and Litman, 1993), especially for segmentation both in prose (Hearst, 1997) and conversation (Galley et al., 2003). The approach of content selection, meanwhile, has been explored for sentiment analysis (Pang and Lee, 2004), where individual sentences may be less subjective and therefore less relevant to the sentiment classification task. It is also similar conceptually to content selection algorithms that have been used for text summarization (Teufel and Moens, 2002) and text generation (Sauper and Barzilay, 2009), both of which rely on finding highly-relevant passages within source texts. Our work is distinct from these approaches. While we have coarse-grained annotations of empowerment, there is no direct annotation of what makes a good cue for content selection. With our cues, we hope to take advantage of shallow discourse structure in conversation, such as contrastive markers, making use of implicit structure in the conversational domain. 4.1 Notation Before describing extensions to the baseline logistic regression model, we define notation. Our data is arranged hierarchically. We assume that we have a collection of d training documents Tr = {D1 . . . 
Dd}, each of which contains many training instances (in our task, an instance consists of all lines of chat from one user in one thread). Our total set of n instances I thus consists of instances {I1, I2, . . . In}. Each document contains lines of chat L and each instance Ii is comprised of some subset of those lines, Li ⊆L. Our feature space X = {x1, x2, . . . xm} consists of m unigram features representing the observed vocabulary used in our corpus. Each instance is associated with a feature vector ¯x containing values for each x ∈X, and each feature x that is present in the i-th instance maintains a “memory” of the lines in which it appeared in that instance, Lix, where Lix ⊆Li. Our potential output labels consist of Y = {NA, NEG, POS}, though this generalizes to any nominal classification task. Each instance I is associated with exactly one y ∈Y for self-empowerment and one for other-empowerment; these two labels do not interact and our tasks are treated as independent in this paper4. We define classifiers as functions f(¯x →y ∈Y); in practice, we use logistic regression via LibLINEAR (Fan et al., 2008). We define a content selection rule as a pairing r = ⟨c, t⟩between a cue feature c ∈X and a selection function t ∈T. We created a list of possible selection functions, given a cue c, maximizing for generality while being expressive. These are illustrated in Figure 2 and described below: • Ignore Local Future (A): Ignore all features from the two lines after each occurrence of c. • Ignore All Future (B): Ignore all features occurring after the first occurrence of c. • Ignore Local History (C): Ignore all features in the two lines preceding each occurrence of c. • Ignore All History (D): Ignore all features occurring only before the last occurrence of c. We define an ensemble member E = ⟨R, fR⟩the ordered list of learned content selection rules R = [r1, r2, . . . ] and a classifier fC trained on instances transformed by those rules. Our final out4Future work may examine the interaction of jointly annotating multiple sparse social phenomena. 107 Figure 2: Effects of content selection rules, based on a cue feature (ovals) observed at lines m and n. put of a trained model is a set of ensemble members {E1, . . . , Ek}. 4.2 Algorithm Our ensemble learning follows the paradigm of cross-validated committees (Parmanto et al., 1996), where k ensemble members are trained by subdividing our training data into k subfolds. For each ensemble classifier, cue rules R are generated on k −1 subfolds (Trk) and evaluated on the remaining subfold (Tek). In practice, with 21 training documents, 7-fold cross-validation, and k = 3 ensemble members, each generation set consists of 12 documents’ instances, while each evaluation set contains instances from 6 documents. Our full algorithm is presented in Algorithm 1, and is broken into component parts for clarity. Algorithm 2 begins by measuring the baseline classifier’s ability to recognize minority-class labels. After training on Trk, we measure the average probability assigned to the correct label of instances in Tek, but only for instances whose correct labels are minority classes (remember, because both Trk and Tek are drawn from the overall Tr, we have access to true class labels). We choose this subset of only minority instances, as we are not interested in optimizing to the majority class. We next enumerate all rules that we wish to judge. To keep this problem tractable, we ignore features which do not occur in at least 5% of training instances. 
For the remaining features, we create a candidate rule for each possible pairing of features and selection functions. For each of these candidates, we test its utility by selecting content as if it were an actual rule, then building a new classifier (trained on the generation set) using instances that have been altered in that way. In the evaluation set, we measure the difference in probability of minority class labels being assigned correctly between the baseline and this altered space. This measure of an individual rule’s impact is described in Algorithm 3. Once we have evaluated every possible rule once, we select the top-ranked rule and apply it to the feature set. We then iteratively progress through our now-ranked list of candidates, each time treating the newly filtered dataset as our new baseline. We search only top candidates for efficiency, following the fixed-width search methodology for feature selection in very high-dimensionality feature spaces (G¨utlein et al., 2009). Each ensemble classifier is finally retrained on all training data, after applying the corresponding content selection rules to that data. 5 Prediction Our prediction algorithm begins with a standard implementation of cross-validated committees (Parmanto et al., 1996), whose results are aggregated with a confidence voting method intended to favor rare labels (Erp et al., 2002). Cross-validated committees are an ensemble technique used to subsample training data to produce multiple hypotheses for classification. Each classifier produced by our cue-based transformation is trained on a subset of our training data. Each makes predictions on all test set instances, producing a distribution of confidence across possible labels. These values serve as inputs to a voting method to produce a final label for each instance. Compared to other ensemble methods, crossvalidated committees as described above are a good fit for our task, because of its unique unit of analysis. As thread-level analysis is the set of individual participants’ turns in a conversation, we risk overfitting if we sample from the same conversations for the training and testing sets. In contrast to standard bagging, hard sampling boundaries never train and test on instances drawn from the same conversation. To aggregate the votes from members of this ensemble into a final prediction, we employ a variant on Selfridge’s Pandemonium (Selfridge, 1958). If a minority label is selected as the highestconfidence value in any classifier in our ensemble, it is selected. The majority label, by contrast, is only selected if it is the most likely prediction by all classifiers in our ensemble. Thus consensus is required to elect the majority class, and the strongest minority candidate is elected otherwise. 108 In : generation set Trk, evaluation set Tek Out: ensemble committee {E1 . . . Ek} for i = 1 to k do Rfinal ←[ ]; Xfreq ←{x ∈X | freq(x) ∈Trk > 5%}; R ←Xfreq × T; R∗←R; repeat Pbase ←EvaluateClassifier(Trk, Tek); EvaluateRules(Pbase, Trk, Tek, R∗); Trk, Tek ←ApplyRule(R∗[0]); R ←R −R∗[0]; ∆←score(R∗[0]); Rfinal ←Rfinal + R∗[0]; R∗←R[0 . . . 50]; until ∆< threshold; Trfinal ←Trk ∪Tek; foreach r ∈Rfinal do Trfinal ←ApplyRule(Trfinal, r); end Train f(¯x →y) on Trfinal; end Algorithm 1: LearnSelectionCues() This approach is designed to bias the prediction of our machine learning algorithms in favor of minority classes in a coherent manner. 
If there is a plausible model that has been trained which recognizes the possibility of a rare label, it is used; the prediction only reverts to the majority class when no plausible minority label could be chosen. As validation of this technique, we compare our “minority pandemonium” approach against both typical pandemonium and standard sum-rule confidence voting (Erp et al., 2002). 5.1 Logistic Model Stumps One characteristic of highly skewed data is that, while minority labels may be expressed in a number of different surface forms, there are many obvious cases in which they do not apply. These cases can actually be harmful to classification of borderline cases. Features that could be given high weight in marginal cases may be undervalued in “low-hanging fruit” easy cases. To remove those obvious instances, a very simple screening heuristic is often enough to eliminate frequent phenotypes of instances where the rare annotation is not present. Prior work has sometimes screened training data through obvious heuristic rules, espeIn : generation set Trk, evaluation set Tek Out: minority class probability average Pbase Train f(¯x →y) on Trk; Temin k ←{Instance I ∈Tek | yI ̸= “NA”} ; Pbase ←0 ; foreach Instance I ∈Temin k do Pbase ←Pbase + P(f(¯xI) = yI) end Pbase = Pbase/size(Temin k ) Algorithm 2: EvaluateClassifier() In : Trk, Tek, rules R, base probability Pbase Out: R sorted on each rule’s improvement score foreach Rule r ∈R do Tr′ k, Te′ k ←ApplyRule(Trk, Tek, r); Palter ←EvaluateClassifier(Tr′ k, Te′ k); score(r) ←Palter −Pbase; end Sort R on score(r) from high to low; Algorithm 3: EvaluateRules() cially in speech recognition; for instance, training speech recognition for words followed by a pause separately from words followed by another word (Franco et al., 2010), or training separate models based on gender (Jiang et al., 1999). We achieve this instance screening by learning logistic model tree stumps (Landwehr et al., 2005), which allow us to quickly partition data if there is a particularly easy heuristic that can be learned to eliminate a large number of majorityclass labels. One challenge of this approach is our underlying unigram feature space - tree-based algorithms are generally poor classifiers for the high-dimensionality, low-information features in a lexical feature space (Han et al., 2001). To compensate, we employ a smaller, denser set of binary features for tree stump screening: instance length thresholds and LIWC category membership. First, we define a set of features that split based on the number of lines an instance contains, from 1 to 10 (only a tiny fraction of instances are more than 10 lines long). For example, a feature splitting on instances with lines ≤2 would be true for one- and two-line instances, and false for all others. Second, we define a feature for each category in the Linguistic Inquiry and Word Count dictionary (Tausczik and Pennebaker, 2010) - these broad classes of words allow for more balanced 109 Figure 3: Precision/recall curves for algorithms. After 50% recall all models converge and there are no significant differences in performance. splits than would unigrams alone. Each category’s feature is true if any word in that category was used at least once in that instance. We exhaustively sweep this feature space, and report the most successful stump rules for each annotation task. In our other experiments, we report results with and without the best rule for this preprocessing step; we also measure its impact alone. 
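For concreteness, the minority-favoring vote described in Section 5 can be written compactly as below. This is an illustrative sketch assuming each ensemble member returns a confidence distribution over labels; it is not the actual LightSIDE implementation.

```python
def minority_pandemonium(label_distributions, majority_label="NA"):
    """Aggregate ensemble predictions, favoring minority (rare) labels.

    label_distributions: one dict per ensemble member, mapping label -> confidence.
    """
    top_choices = [max(dist, key=dist.get) for dist in label_distributions]

    # Confidence-label pairs for every member whose top prediction is a minority label.
    minority_votes = [(dist[choice], choice)
                      for dist, choice in zip(label_distributions, top_choices)
                      if choice != majority_label]

    if not minority_votes:
        # The majority label wins only when every member ranks it first.
        return majority_label
    # Otherwise elect the strongest minority candidate.
    return max(minority_votes)[1]
```

For example, with two members returning {'NA': 0.6, 'POS': 0.4} and {'NA': 0.45, 'POS': 0.55}, summed-confidence voting would prefer NA, while this rule returns POS because one member ranks a minority label first.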
6 Experimental Results All experiments were performed using LightSIDE (Mayfield and Ros´e, 2013). We use a binary unigram feature space, and we perform 7-fold crossvalidation. Instances from the same chat transcript never occur in both train and testing folds. Furthermore, we assume that threads have been disentangled already, and our experiments use gold standard thread structure. While this is not a trivial assumption, prior work has shown thread disentanglement to be manageable (Mayfield et al., 2012a); we consider it an acceptable simplifying assumption for our experiments. We compare our methods against baselines including a majority baseline, a baseline logistic regression classifier with L2 regularized features, and two common ensemble methods, AdaBoost (Freund and Schapire, 1996) and bagging (Breiman, 1996) with logistic regression base classifiers5. Table 2 presents the best-performing result from each classification method. For selfempowerment recognition, all methods that we introduce are significant improvements in κ, the 5These methods usually use weak, unstable base classifiers; however, in our experiments, those performed poorly. Table 2: Performance for baselines, common ensemble algorithms, and proposed methods. Statistically significant improvements over baseline are marked (p < .01, †; p < .05, *; p < 0.1, +). Self Other Method % κ % κ Majority 79.3 .000 81.3 .000 LR Baseline 81.0 .367 81.0 .270 LR + Boosting 78.1 .325 78.5 .275 LR + Bagging 81.2 .352 81.9 .265 LR + Committee 81.0 .367 81.0 .270 Learned Stumps 81.8* .385† 81.7 .293+ Content Selection 80.9 .389† 80.7 .282 Stumps+Selection 81.3 .406† 79.4 .254 Table 3: Performance of content-selection wrapped learners, for minority voting and two baseline voting methods. Self Other Method % κ % κ Pandemonium 80.3 .283 81.4 .239 Averaged 80.6 .304 81.6 .251 Minority Voting 80.9† .389† 80.7 .282 measurement of agreement over chance, compared to all baselines. While accuracy remains stable, this is due to predictions shifting away from the majority class and towards minority classes. Our combined model using both logistic model tree stumps and content selection is significantly better than either alone (p < .01). To compare the minority pandemonium voting method against baselines of simple pandemonium and summed confidence voting, Table 3 presents the results of content selection wrappers with each voting method. Minority voting is more effective compared to standard confidence voting, improving κ while modestly reducing accuracy; this is typical of a shift towards minority class predictions. 7 Discussion These results show promise for our techniques, which are able to distinguish features of rare labels, previously awash in a sea of irrelevance. Figure 3 shows the impact of our rules as we tune to different levels of recall, with a large boost in precision when recall is not important; our model converges with the baseline for high-recall, lowprecision tuning. This suggests that our method is particularly suitable for tasks where confident la110 Table 4: Cue rules commonly selected by the algorithm. Average improvement over the LR baseline is also shown. Self-Empowerment Cue Transformation ∆% and,but Ignore Local Future +5.0 have Ignore All History +4.3 ! 
Ignore All History +4.2 me,my Ignore All History +3.4 Other-Empowerment Cue Transformation ∆% and,but Ignore Local Future +5.5 you Ignore Local History +5.2 ’s Ignore Local History +4.1 that Ignore Local History +3.9 beling of a few instances is more important than labeling as many instances as possible. This is common when tasks have a high cost or carry high risk (for instance, providing real-time conversational supports with an agent, where inappropriate intervention could be disruptive). Other low-recall applications include exploration large corpora for exemplar instances, where the most confident predictions for a given label should be presented first for analyst use. In the rest of this section, we examine notable within-instance and per-instance rules selected by our methods. These rules are summarized in Tables 4 and 5. For both self- and other-empowerment, we find pronoun rules that match the task (first-person and second-person pronouns for self-Empowerment and other-Empowerment respectively). In both tasks, we find cue rules that suppress the context preceding personal pronouns. These, as well as the possessive suffix ’s, echo the per-instance effect of the Self and You splits, anticipating that what follows such a personal reference is likely to bear an evaluation of empowerment. Exclamation marks may indicate strong emotion - we find many instances where what precedes a line with an exclamation is more objective, and what follows includes an assessment. Conjunctions but and and are selected as cue rules suppressing the two lines that follow the occurrence - suggesting, as suspected, that connective discourse markers play a role in indicating empowerment (Fraser, 1999). The best-performing stump splits for the SelfEmpowerment annotation are Line Length ≤1 and the LIWC word-categories Article, Swear, and Table 5: Best decision rules for logistic model stumps. Significant improvement (p < 0.05) indicated with *. Self-Empowerment Split Rule κ ∆κ % ∆% Split ≤1 * 0.385 +.018 81.8 +0.8 LIWC-Article 0.379 +.012 81.6 +0.6 LIWC-Swear * 0.376 +.009 81.4 +0.4 LIWC-Self * 0.376 +.009 81.5 +0.5 Other-Empowerment Split Rule κ ∆κ % ∆% LIWC-You 0.293 +.023 81.7 +0.7 LIWC-Eating * 0.283 +.013 81.6 +0.6 LIWC-Negate * 0.282 +.012 82.3 +1.3 LIWC-Present 0.281 +.011 81.6 +0.6 Self. The split on line length corresponds to the observation that longer instances provide greater opportunity for personal narrative self-assessment to occur (95% of single-line instances are labeled NA). The Article category may serve as a proxy for content length - article-less instances in our corpus include one-line social greetings and exchanges of contact information. Swear words may be a cue for awareness of self-empowerment - a recent study of women coping with illness reported that swearing in the presence of others, but not alone, was related to potentially harmful outcomes (Robbins et al., 2011). Among other- oriented split rules, Eating stands out as non-obvious, although medical literature has suggested a link between dietary behavior and empowerment attitudes in a study of women with cancer (Pinto et al., 2002). 8 Conclusion We have demonstrated an algorithm for improving automated classification accuracy on highly skewed tasks for conversational data. This algorithm, particularly its focus on content selection, is rooted in the structural format of our data, which can generalize to many tasks involving conversational data. 
Our experiments show that this model significantly improves machine learning performance. Our algorithm is taking advantage of structural facets of discourse markers, lending basic sociolinguistic validity to its behavior. Though we have treated each of these rarely-occurring labels as independent thus far, in practice we know that this is not the case. Joint prediction of labels through structured modeling is an obvious next 111 step for improving classification accuracy. This is an important step towards large-scale analysis of the impact of support groups on patients and caregivers. Our method can be used to confidently highlight occurrences of rare labels in large data sets. This has real-world implications for professional intervention in social conversational domains, especially in scenarios where such an intervention is likely to be associated with a high cost or high risk. With the construction of more accurate classifiers, we open the possibility of automating annotation on large conversational datasets, enabling new directions for researchers with domain expertise. Acknowledgments The research reported here was supported by National Science Foundation grant IIS-0968485. References Paige Adams and Craig Martel. 2010. Conversational thread extraction and topic detection in text-based chat. In Semantic Computing. David Adamson and Carolyn Penstein Ros´e. 2012. Coordinating multi-dimensional support in collaborative conversational agents. In Proceedings of Intelligent Tutoring Systems. Albert Bandura. 1997. Self-Efficacy: The Exercise of Control. Azy Barak, Meyran Boniel-Nissim, and John Suler. 2008. Fostering empowerment in online support groups. Computers in Human Behavior. A Boehm and L H Staples. 2002. The functions of the social worker in empowering: The voices of consumers and professionals. Social Work. Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of IJCAI. Leo Breiman. 1996. Bagging predictors. Machine Learning. Pierre Dillenbourg. 2002. Over-scripting cscl: The risks of blending collaborative learning with instructional design. Three worlds of CSCL. Can we support CSCL? Micha Elsner and Eugene Charniak. 2010. Disentangling chat. Computational Linguistics. Merijn Van Erp, Louis Vuurpijl, and Lambert Schomaker. 2002. An overview and comparison of voting methods for pattern recognition. In Frontiers in Handwriting Recognition. IEEE. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Horacio Franco, Harry Bratt, Romain Rossier, Venkata Rao Gadde, Elizabeth Shriberg, Victor Abrash, and Kristin Precoda. 2010. Eduspeak: A speech recognition and pronunciation scoring toolkit for computer-aided language learning applications. Language Testing. Eibe Frank and Remco R Bouckaert. 2006. Naive bayes for text classification with unbalanced classes. Knowledge Discovery in Databases. Bruce Fraser. 1999. What are discourse markers? Journal of pragmatics, 31(7):931–952. Yoav Freund and Robert E Schapire. 1996. Experiments with a new boosting algorithm. In Proceedings of ICML. Michel Galley, Kathleen McKeown, Eric FoslerLussier, and Hongyan Jing. 2003. Discourse segmentation of multi-party conversation. In Proceedings of ACL. Martin G¨utlein, Eibe Frank, Mark Hall, and Andreas Karwath. 2009. Large-scale attribute selection using wrappers. In Proceedings of IEEE CIDM. Eui-Hong Han, George Karypis, and Vipin Kumar. 2001. 
Text categorization using weight adjusted k-nearest neighbor classification. Lecture Notes in Computer Science: Advances in Knowledge Discovery and Data Mining. Ahmed Hassan, Anthony Fader, Michael H Crespin, Kevin M Quinn, Burt L Monroe, Michael Colaresi, and Dragomir R Radev. 2008. Tracking the dynamic evolution of participant salience in a discussion. In Proceedings of Coling. Marti A Hearst. 1997. Texttiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics. Julia Hirschberg and Diane Litman. 1993. Empirical studies on the disambiguation of cue phrases. Computational Linguistics. Mette Terp Hoybye, Christoffer Johansen, and Tine Tjornhoj-Thomsen. 2005. Online interaction effects of storytelling in an internet breast cancer support group. Psycho-oncology. Hui Jiang, Keikichi Hirose, and Qiang Huo. 1999. Robust speech recognition based on a bayesian prediction approach. In IEEE Transactions on Speech and Audio Processing. Niels Landwehr, Mark Hall, and Eibe Frank. 2005. Logistic model trees. Machine Learning. Elijah Mayfield and Carolyn Penstein Ros´e. 2013. Lightside: Open source machine learning for text. In Handbook of Automated Essay Evaluation: Current Applications and New Directions. 112 Elijah Mayfield, David Adamson, and Carolyn Penstein Ros´e. 2012a. Hierarchical conversation structure prediction in multi-party chat. In Proceedings of SIGDIAL Meeting on Discourse and Dialogue. Elijah Mayfield, Miaomiao Wen, Mitch Golant, and Carolyn Penstein Ros´e. 2012b. Discovering habits of effective online support group chatrooms. In ACM Conference on Supporting Group Work. Kanayo Ogura, Takashi Kusumi, and Asako Miura. 2008. Analysis of community development using chat logs: A virtual support group of cancer patients. In Proceedings of the IEEE Symposium on Universal Communication. Jason E. Owen, Erin O’Carroll Bantum, and Mitch Golant. 2008. Benefits and challenges experienced by professional facilitators of online support groups for cancer survivors. In Psycho-Oncology. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the Association for Computational Linguistics. Bambang Parmanto, Paul Munro, and Howard R Doyle. 1996. Improving committee diagnosis with resampling techniques. In Proceedings of NIPS. James W Pennebaker and J D Seagal. 1999. Forming a story: The health benefits of narrative. Journal of Clinical Psychology. Bernardine M Pinto, Nancy C Maruyama, Matthew M Clark, Dean G Cruess, Elyse Park, and Mary Roberts. 2002. Motivation to modify lifestyle risk behaviors in women treated for breast cancer. In Mayo Clinic Proceedings. Andrei Popescu-Belis. 2008. Dimensionality of dialogue act tagsets: An empirical analysis of large corpora. In Language Resources and Evaluation. Vinodkumar Prabhakaran, Owen Rambow, and Mona Diab. 2012. Predicting overt display of power in written dialogs. In Proceedings of NAACL. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In Proceedings of LREC. Megan L Robbins, Elizabeth S Focella, Shelley Kasle, Ana Mar´ıa L´opez, Karen L Weihs, and Matthias R Mehl. 2011. Naturalistically observed swearing, emotional support, and depressive symptoms in women coping with illness. Health Psychology, 30:789. Carolyn Penstein Ros´e, Yi-Chia Wang, Yue Cui, Jaime Arguello, Karsten Stegmann, Armin Weinberger, and Frank Fischer. 2008. 
Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in computer-supported collaborative learning. In International Journal of Computer Supported Collaborative Learning. Nikol Rummel, Armin Weinberger, Christof Wecker, Frank Fischer, Anne Meier, Eleni Voyiatzaki, George Kahrimanis, Hans Spada, Nikolaos Avouris, and Erin Walker. 2008. New challenges in cscl: Towards adaptive script support. In Proceedings of ICLS. Christina Sauper and Regina Barzilay. 2009. Automatically generating wikipedia articles: A structureaware approach. In Proceedings of ACL. Emanuel A Schegloff. 1968. Sequencing in conversational openings. American Anthropologist. Oliver G Selfridge. 1958. Pandemonium: a paradigm for learning. In Proceedings of Symposium on Mechanisation of Thought Processes, National Physical Laboratory. Gerry Stahl. 2012. Interaction analysis of a biology chat. Productive multivocality. Lee H Staples. 1990. Powerful ideas about empowerment. Administration in Social Work. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of Language and Social Psychology. Simone Teufel and Marc Moens. 2002. Summarizing scientic articles: Experiments with relevance and rhetorical status. Computational Linguistics. C F Van Uden-Kraan, C H C Drossaert, E Taal, E R Seydel, and M A F J Van de Laar. 2009. Participation in online patient support groups endorses patients empowerment. Patient Education and Counseling. R Vauth, B Kleim, M Wirtz, and P W Corrigan. 2007. Self-efficacy and empowerment as outcomes of selfstigmatizing and coping in schizophrenia. Psychiatry Research. Ingrid Wahlin, Anna-Christina Ek, and Ewa Idvali. 2006. Patient empowerment in intensive carean interview study. Intensive and Critical Care Nursing. William Yang Wang, Samantha Finkelstein, Amy Ogan, Alan Black, and Justine Cassell. 2012. “love ya, jerkface:” using sparse log-linear models to build positive (and impolite) relationships with teens. In Proceedings of SIGDIAL. Gary M Weiss and Haym Hirsh. 1998. Learning to predict rare events in event sequences. In Proceedings of KDD. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation. 113
2013
11
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1116–1126, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Online Relative Margin Maximization for Statistical Machine Translation Vladimir Eidelman Computer Science and UMIACS University of Maryland College Park, MD [email protected] Yuval Marton Microsoft City Center Plaza Bellevue, WA [email protected] Philip Resnik Linguistics and UMIACS University of Maryland College Park, MD [email protected] Abstract Recent advances in large-margin learning have shown that better generalization can be achieved by incorporating higher order information into the optimization, such as the spread of the data. However, these solutions are impractical in complex structured prediction problems such as statistical machine translation. We present an online gradient-based algorithm for relative margin maximization, which bounds the spread of the projected data while maximizing the margin. We evaluate our optimizer on Chinese-English and ArabicEnglish translation tasks, each with small and large feature sets, and show that our learner is able to achieve significant improvements of 1.2-2 BLEU and 1.7-4.3 TER on average over state-of-the-art optimizers with the large feature set. 1 Introduction The desire to incorporate high-dimensional sparse feature representations into statistical machine translation (SMT) models has driven recent research away from Minimum Error Rate Training (MERT) (Och, 2003), and toward other discriminative methods that can optimize more features. Examples include minimum risk (Smith and Eisner, 2006), pairwise ranking (PRO) (Hopkins and May, 2011), RAMPION (Gimpel and Smith, 2012), and variations of the margin-infused relaxation algorithm (MIRA) (Watanabe et al., 2007; Chiang et al., 2008; Cherry and Foster, 2012). While the objective function and optimization method vary for each optimizer, they can all be broadly described as learning a linear model, or parameter vector w, which is used to score alternative translation hypotheses. In every SMT system, and in machine learning in general, the goal of learning is to find a model that generalizes well, i.e. one that will yield good translations for previously unseen sentences. However, as the dimension of the feature space increases, generalization becomes increasingly difficult. Since only a small portion of all (sparse) features may be observed in a relatively small fixed set of instances during tuning, we are prone to overfit the training data. An alternative approach for solving this problem is estimating discriminative feature weights directly on the training bitext (Tillmann and Zhang, 2006; Blunsom et al., 2008; Simianer et al., 2012), which is usually substantially larger than the tuning set, but this is complementary to our goal here of better generalization given a fixed size tuning set. In order to achieve that goal, we need to carefully choose what objective to optimize, and how to perform parameter estimation of w for this objective. We focus on large-margin methods such as SVM (Joachims, 1998) and passive-aggressive algorithms such as MIRA. Intuitively these seek a w such that the separating distance in geometric space of two hypotheses is at least as large as the cost incurred by selecting the incorrect one. This criterion performs well in practice at finding a linear separator in high-dimensional feature spaces (Tsochantaridis et al., 2004; Crammer et al., 2006). 
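To make the large-margin criterion concrete, the following is a minimal sketch (not the paper's implementation) of scoring hypotheses with a sparse linear model and checking that the better hypothesis outscores the worse one by at least the incurred cost. The feature names and toy hypotheses are illustrative only.

```python
# A minimal sketch of linear scoring w . f(x, y, d) and the large-margin
# condition described above. Feature vectors are sparse dicts; the feature
# names and toy hypotheses below are purely illustrative.

def score(w, feats):
    """Linear model score for one hypothesis."""
    return sum(w.get(name, 0.0) * value for name, value in feats.items())

def margin_satisfied(w, feats_good, feats_bad, cost):
    """Large-margin criterion: the better hypothesis should outscore the
    worse one by at least the cost incurred by picking the worse one."""
    return score(w, feats_good) - score(w, feats_bad) >= cost

# Toy example: two hypotheses for one source sentence.
w = {"lm": 0.5, "tm": 0.3, "word_penalty": -0.1}
hyp_good = {"lm": 2.0, "tm": 1.5, "word_penalty": 7}   # higher BLEU
hyp_bad = {"lm": 1.8, "tm": 0.9, "word_penalty": 6}    # lower BLEU
cost = 0.2   # e.g. BLEU(good) - BLEU(bad)
print(margin_satisfied(w, hyp_good, hyp_bad, cost))
```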
Now, recent advances in machine learning have shown that the generalization ability of these learners can be improved by utilizing second order information, as in the Second Order Perceptron (Cesa-Bianchi et al., 2005), Gaussian Margin Machines (Crammer et al., 2009b), confidenceweighted learning (Dredze and Crammer, 2008), AROW (Crammer et al., 2009a; Chiang, 2012) and Relative Margin Machines (RMM) (Shivaswamy and Jebara, 2009b). The latter, RMM, was introduced as an effective and less computationally expensive way to incorporate the spread of the data – second order information about the 1116 distance between hypotheses when projected onto the line defined by the weight vector w. Unfortunately, not all advances in machine learning are easy to apply to structured prediction problems such as SMT; the latter often involve latent variables and surrogate references, resulting in loss functions that have not been well explored in machine learning (Mcallester and Keshet, 2011; Gimpel and Smith, 2012). Although Shivaswamy and Jebara extended RMM to handle sequential structured prediction (Shivaswamy and Jebara, 2009a), their batch approach to quadratic optimization, using existing off-the-shelf QP solvers, does not provide a practical solution: as Taskar et al. (2006) observe, “off-the-shelf QP solvers tend to scale poorly with problem and training sample size” for structured prediction problems.. This motivates an online gradient-based optimization approach—an approach that is particularly attractive because its simple update is well suited for efficiently processing structured objects with sparse features (Crammer et al., 2012). The contributions of this paper include (1) introduction of a loss function for structured RMM in the SMT setting, with surrogate reference translations and latent variables; (2) an online gradientbased solver, RM, with a closed-form parameter update to optimize the relative margin loss; and (3) an efficient implementation that integrates well with the open source cdec SMT system (Dyer et al., 2010).1 In addition, (4) as our solution is not dependent on any specific QP solver, it can be easily incorporated into practically any gradientbased learning algorithm. After background discussion on learning in SMT (§2), we introduce a novel online learning algorithm for relative margin maximization suitable for SMT (§3). First, we introduce RMM (§3.1) and propose a latent structured relative margin objective which incorporates cost-augmented hypothesis selection and latent variables. Then, we derive a simple closed-form online update necessary to create a large margin solution while simultaneously bounding the spread of the projection of the data (§3.2). Chinese-English translation experiments show that our algorithm, RM, significantly outperforms strong state-of-the-art optimizers, in both a basic feature setting and high-dimensional (sparse) feature space (§4). Additional ArabicEnglish experiments further validate these results, 1https://github.com/veidel/cdec even where previously MERT was shown to be advantageous (§5). Finally, we discuss the spread and other key issues of RM (§6), and conclude with discussion of future work (§7). 
2 Learning in SMT Given an input sentence in the source language x ∈X, we want to produce a translation y ∈Y(x) using a linear model parameterized by a weight vector w: (y∗, d∗) = arg max (y,d)∈Y(x),D(x) w⊤f(x, y, d) where w⊤f(x, y, d) is the weighted feature scoring function, hereafter s(x, y, d), and Y(x) is the space of possible translations of x. While many derivations d ∈D(x) can produce a given translation, we are only able to observe y; thus we model d as a latent variable. Although our models are actually defined over derivations, they are always paired with translations, so our feature function f(x, y, d) is defined over derivation–translation pairs.2 The learning goal is then to estimate w. The instability of MERT in larger feature sets (Foster and Kuhn, 2009; Hopkins and May, 2011), has motivated many alternative tuning methods for SMT. These include strategies based on batch log-linear models (Tillmann and Zhang, 2006; Blunsom et al., 2008), as well as the introduction of online linear models (Liang et al., 2006a; Arun and Koehn, 2007). Recent batch optimizers, PRO and RAMPION, and Batch-MIRA (Cherry and Foster, 2012), have been partly motivated by existing MT infrastructures, as they iterate between decoding the entire tuning set and optimizing the parameters. PRO considers tuning a classification problem and employs a binary classifier to rank pairs of outputs. RAMPION aims to address the disconnect between MT and machine learning by optimizing a structured ramp loss with a concave-convex procedure. 2.1 Large-Margin Learning Online large-margin algorithms, such as MIRA, have also gained prominence in SMT, thanks to their ability to learn models in high-dimensional feature spaces (Watanabe et al., 2007; Chiang et al., 2009). The usual presentation of MIRA’s optimization problem is given as a quadratic program: 2We may omit d in some equations for clarity. 1117 wt+1 = arg min w 1 2||w −wt||2 + Cξi s.t. s(xi, yi, d) −s(xi, y′, d) ≥∆i(y′) −ξi (1) where y′ is the single most violated constraint, the cost ∆i(y) is computed using an external measure of quality, such as 1-BLEU(yi, y), and a slack variable ξi is introduced to allow for non-separable instances. C acts as a regularization parameter, trading off between margin maximization and constraint violations. While solving the optimization problem relies on computing the margin between the correct output yi, and y′, in SMT our decoder is often incapable of producing the reference translation, i.e. yi /∈Y(xi). We must instead resort to selecting a surrogate reference, y+ ∈Y(xi). This issue has recently received considerable attention (Liang et al., 2006a; Eidelman, 2012; Chiang, 2012), with preference given to surrogate references obtained through cost-diminished hypothesis selection. Thus, y+ is selected based on a combination of model score and error metric from the k-best list produced by our current model. A similar selection is made for the cost-augmented hypothesis y−∈Y(xi): (y+, d+) ← arg max (y,d)∈Y(xi),D(xi) s(xi, y, d) −∆i(y) (y−, d−) ← arg max (y,d)∈Y(xi),D(xi) s(xi, y, d) + ∆i(y) In this setting, the optimization problem becomes: wt+1 = arg min w 1 2||w −wt||2 + Cξi s.t. 
δs(xi, y+, y−) ≥∆i(y−) −∆i(y+) −ξi (2) where δs(xi, y+, y−)=s(xi, y+, d+)-s(xi, y−, d−) This leads to a variant of the structured ramp loss to be optimized: ℓ= − max (y+,d+)∈Y(xi),D(xi) s(xi, y+, d+) −∆i(y+)  + max (y−,d−)∈Y(xi),D(xi) s(xi, y−, d−) + ∆i(y−)  (3) The passive-aggressive update (Crammer et al., 2006), which is used to solve this problem, updates w on each round such that the score of the correct hypothesis y+ is greater than the score of the incorrect y−by a margin at least as large as the cost incurred by predicting the incorrect hypothesis, while keeping the change to w small. (a) (b) Figure 1: (a) RM and large margin solution comparison and (b) the spread of the projections given by each. RM and large margin solutions are shown with a darker dotted line and a darker solid line, respectively. 3 The Relative Margin Machine in SMT 3.1 Relative Margin Machine The margin, the distance between the correct hypothesis and incorrect one, is defined by s(xi, y+, d+) and s(xi, y−, d−). It is maximized by minimizing the norm in SVM, or analogously, the proximity constraint in MIRA: arg minw 1 2||w −wt||2. However, theoretical results supporting large-margin learning, such as the VC-dimension (Vapnik, 1995) or the Rademacher bound (Bartlett and Mendelson, 2003) consider measures of complexity, in addition to the empirical performance, when describing future predictive ability. The measures of complexity usually take the form of some value on the radius of the data, such as the ratio of the radius of the data to the margin (Shivaswamy and Jebara, 2009a). As radius is a way of measuring spread in any projection direction, here we will specifically be interested in the the spread of the data as measured after the projection defined by the learned model w. More formally, the spread is the distance between y+, and the worst candidate (yw, dw) ←arg min(y,d)∈Y(xi),D(xi) s(xi, y, d), after projecting both onto the line defined by the weight vector w. For each y′, this projection is conveniently given by s(xi, y′, d), thus the spread is calculated as δs(xi, y+, yw). RMM was introduced as a generalization over SVM that incorporates both the margin constraint 1118 and information regarding the spread of the data. The relative margin is the ratio of the absolute, or maximum margin, to the spread of the projected data. Thus, the RMM learns a large margin solution relative to the spread of the data, or in other words, creates a max margin while simultaneously bounding the spread of the projected data. As a concrete example, consider the plot shown in Figure 1(a), with hypotheses represented by two-dimensional feature vectors. The point marked with a circle in the upper right represents f(xi, y+), while all other squares represent alternative incorrect hypotheses f(xi, y′). The large margin decision boundary is shown with a darker solid line, while the relative margin solution is shown with a darker dotted line. The lighter lines parallel to each define the margins, with the square at the intersection being f(xi, y−). The bottom portion of Figure 1(b) presents an alternative view of each solution, showing the projections of the hypotheses given the learned model of each. Notice that with a large margin solution, although the distance between y+ and y−is greater, the points are highly spread, extending far to the left of the decision boundary. 
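For concreteness, the quantities being compared here, the margin δs(xi, y+, y−) and the spread δs(xi, y+, yw), can be computed from a k-best list as in the sketch below. It assumes each candidate is a (sparse feature dict, cost) pair, where cost plays the role of ∆i; the helper names are illustrative and not taken from any released implementation.

```python
# A hedged sketch of hypothesis selection and spread computation over a
# k-best list. `kbest` is a list of (features, cost) pairs, where cost is
# e.g. 1 - sentence-BLEU.

def dot(w, feats):
    return sum(w.get(k, 0.0) * v for k, v in feats.items())

def select_hypotheses(w, kbest):
    """Surrogate reference y+ (cost-diminished), fear candidate y-
    (cost-augmented), and worst candidate yw (lowest model score)."""
    y_plus = max(kbest, key=lambda h: dot(w, h[0]) - h[1])
    y_minus = max(kbest, key=lambda h: dot(w, h[0]) + h[1])
    y_worst = min(kbest, key=lambda h: dot(w, h[0]))
    return y_plus, y_minus, y_worst

def margin_and_spread(w, kbest):
    y_plus, y_minus, y_worst = select_hypotheses(w, kbest)
    margin = dot(w, y_plus[0]) - dot(w, y_minus[0])   # delta-s(x, y+, y-)
    spread = dot(w, y_plus[0]) - dot(w, y_worst[0])   # delta-s(x, y+, yw)
    return margin, spread
```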
In contrast, with a relative margin, although we have a smaller absolute margin, the spread is smaller, all points being within a smaller distance ϵ of the decision boundary. The higher the spread of the projection, the higher the variance of the projected points, and the greater the likelihood that we will mislabel a new instance, since the high variance projections may cross the learned decision boundary. In higher dimensions, accounting for the spread becomes even more crucial, as will be discussed in Section 6.3 Although RMM is theoretically well-founded and improves practical performance over largemargin learning in the settings where it was introduced, it is unsuitable for most complex structured prediction in NLP. Nonetheless, since structured RMM is a generalization of Structured SVM, which shares its underlying objective with MIRA, our intuition is that SMT should be able to benefit as well. But to take advantage of the second-order information RMM utilizes for increased generalizability in SMT, we need a computationally effi3The motivation of confidence-weighted estimation (Dredze and Crammer, 2008) and AROW (Crammer et al., 2009a) is related in spirit. They use second-order information in the form of a distribution over weights to change the maximum margin solution. cient optimization procedure that does not require batch training or an off-the-shelf QP solver. 3.2 RM Algorithm We address the above-mentioned limitations by introducing a novel online learning algorithm for relative margin maximization, RM. The relative margin solution is obtained by maximizing the same margin as Equation (2), but now with respect to the distance between y+, and the worst candidate yw. Thus, the relative margin dictates trading-off between a large margin as before, and a small spread of the projection, in other words, bounding the distance between y+ and yw. The additional computation required, namely, obtaining yw, is efficient to perform, and has likely already happened while obtaining the k-best derivations necessary for the margin update. The online latent structured soft relative margin optimization problem is then: wt+1 = arg min w 1 2||w −wt||2 + Cξi + Dτi s.t.: δs(xi, y+, y−) ≥∆i(y−) −∆i(y+) −ξi −B −τi ≤δs(xi, y+, yw) ≤B + τi (4) where additional bounding constraints are added to the usual margin constraints in order to contain the spread by bounding the difference in projections. B is an additional parameter; it controls the spread, trading off between margin maximization and spread minimization. Notice that when B →∞, the bounding constraints disappear, and we are left with the original problem in Equation (2). D, which plays an analogous role to C, allows penalized violations of the bounding constraints. The dual of Equation (4) can be derived as: max α,β,β∗L = X y∈Y(xi) αy −B X y∈Y(xi) βy −B X y∈Y(xi) β∗ y −1 2  X y∈Y(xi) αyωi(y+, y) − X y∈Y(xi) βyωi(y+, y) + X y∈Y(xi) β∗ yωi(y+, y), X y′∈Y(xj) αy′ωj(y+, y′) − X y′∈Y(xj) βy′ωj(y+, y′) + X y′∈Y(xj) β∗ y′ωj(y+, y′)  (5) where the α Lagrange multiplier corresponds to the standard margin constraint, while β and 1119 β∗each correspond to a bounding constraint, and ωi(y+, y′) corresponds to the difference of f(xi, y+, d+) and f(xi, y′, d′). 
The weight update can then be obtained from the dual variables: X αyωi(y+, y) − X βyωi(y+, y) + X β∗ yωi(y+, y) (6) The dual in Equation (5) can be optimized using a cutting plane algorithm, an effective method for solving a relaxed optimization problem in the dual, used in Structured SVM, MIRA, and RMM (Tsochantaridis et al., 2004; Chiang, 2012; Shivaswamy and Jebara, 2009a). The cutting plane presented in Alg. 1 decomposes the overall problem into subproblems which are solved independently by creating working sets Sj i , which correspond to the largest violations of either the margin constraint, or bounding constraints, and iteratively satisfying the constraints in each set. The cutting plane in Alg. 1 makes use of the the closed-form gradient-based updates we derived for RM presented in Alg. 2. The updates amount to performing a subgradient descent step to update w in accordance with the constraints. Since the constraint matrix of the dual program is not strictly decomposable across constraint types, we are in effect solving an approximation of the original problem. Algorithm 1 RM Cutting Plane Algorithm (adapted from (Shivaswamy and Jebara, 2009a)) Require: ith training example (xi, yi), weight w, margin reg. C, bound B, bound reg. D, ϵ, ϵB 1: S1 i ←  y+ , S2 i ←  y+ , S3 i ←  y+ 2: repeat 3: H(y) := ∆i(y) −∆i(y+) −δs(xi, y+, y) 4: y1 ←arg maxy∈Y(xi) H(y) 5: y2 ←arg maxy∈Y(xi) G(y) := δs(xi, y+, y) 6: y3 ←arg miny∈Y(xi) −G(y) 7: ξ ←max {0, maxy∈Si H(y)} 8: V1 ←H(y1) −ξ −ϵ 9: V2 ←G(y2) −B −ϵB 10: V3 ←−G(y3) −B −ϵB 11: j ←arg maxj′∈{1,2,3} Vj′ 12: if Vj > 0 then 13: Sj i ←Sj i ∪{yj} 14: OPTIMIZE(w, S1 i , S2 i , S3 i , C, B) ▷see Alg. 2 15: end if 16: until S1 i , S2 i , S3 i do not change Alternatively, we could utilize a passiveaggressive updating strategy (Crammer et al., 2006), which would simply bypass the cutting plane and select the most violated constraint for Algorithm 2 RM update with α, β, β∗ 1: procedure OPTIMIZE(w, S1 i , S2 i , S3 i , C, B) 2: while w changes do 3: if S1 i > 1 then 4: UPDATEMARGIN(w, S1 i , C) 5: end if 6: if S2 i > 1 then 7: UPDATEUPPERBOUND(w, S2 i , B) 8: end if 9: if S3 i > 1 then 10: UPDATELOWERBOUND(w, S3 i , B) 11: end if 12: end while 13: end procedure 14: procedure UPDATEMARGIN(w, S1 i , C) 15: αy ←0 for all y ∈S1 i 16: αy+ i ←C 17: for n ←1...MaxIter do 18: Select two constraints y, y′ from S1 i 19: γα ←∆i(y′)−∆i(y)−δs(xi, y, y′) ||ω(y,y′)||2 20: γα ←max(−αy, min(αy′, γα)) 21: αy ←αy + γα ; α′ y ←α′ y −γα 22: w ←w + γα(ω(y, y′)) 23: end for 24: end procedure 25: procedure UPDATEUPPERBOUND(w, S2 i , B) 26: βy ←0 for all y ∈S2 i 27: for n ←1...MaxIter do 28: Select one constraint y from S2 i 29: γβ ←max(0, B−δs(xi,y+,y) ||ω(y+,y)||2 ) 30: βy ←βy + γβ 31: w ←w −γβ(ω(y+, y)) 32: end for 33: end procedure 34: procedure UPDATELOWERBOUND(w, S3 i , B) 35: β∗ y ←0 for all y ∈S3 i 36: for n ←1...MaxIter do 37: Select one constraint y from S3 i 38: γβ∗←max(0, −B−δs(xi,y+,y) ||ω(y+,y)||2 ) 39: β∗ y ←β∗ y + γβ∗ 40: w ←w + γβ∗(ω(y+, y)) 41: end for 42: end procedure each set, if there is one, and perform the corresponding parameter updates in Alg. 2. We refer to the resulting passive-aggressive algorithm as RM-PA, and the cutting plane version as RM-CP. Preliminary experiments showed that RM-PA performs on par with RM-CP, thus RM-PA is the one used in the empirical evaluation below. A graphical depiction of the passive-aggressive RM update is presented in Figure 2. 
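A hedged sketch of the passive-aggressive variant RM-PA follows: one margin step and one bounding step per example, with step sizes capped by C and D. The capping and sign conventions are a simplification of Alg. 2, chosen to match its stated intent (enforce the cost-sized margin while keeping |δs(xi, y+, yw)| ≤ B); they should not be read as the exact cdec implementation.

```python
# Sketch of one RM-PA update. y_plus, y_minus, y_worst are (features, cost)
# pairs selected as above; w is a sparse weight dict updated in place.

def sub(f1, f2):
    """omega(y, y') = f(x, y) - f(x, y') as a sparse dict."""
    keys = set(f1) | set(f2)
    return {k: f1.get(k, 0.0) - f2.get(k, 0.0) for k in keys}

def sqnorm(f):
    return sum(v * v for v in f.values()) or 1e-12

def axpy(w, step, f):
    for k, v in f.items():
        w[k] = w.get(k, 0.0) + step * v

def rm_pa_update(w, y_plus, y_minus, y_worst, B, C, D):
    dot = lambda f: sum(w.get(k, 0.0) * v for k, v in f.items())
    # Margin step (as in MIRA): enforce delta-s(y+, y-) >= cost(y-) - cost(y+).
    omega = sub(y_plus[0], y_minus[0])
    loss = (y_minus[1] - y_plus[1]) - dot(omega)
    if loss > 0:
        axpy(w, min(C, loss / sqnorm(omega)), omega)
    # Bounding step: keep the projected spread |delta-s(y+, yw)| within B.
    omega_w = sub(y_plus[0], y_worst[0])
    spread = dot(omega_w)
    if spread > B:
        axpy(w, -min(D, (spread - B) / sqnorm(omega_w)), omega_w)
    elif spread < -B:
        axpy(w, min(D, (-B - spread) / sqnorm(omega_w)), omega_w)
```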
The upper right circle represents y+, while all other squares represent alternative hypotheses y′. As in the standard MIRA solution, we select the maximum margin constraint violator, y−, shown as the triangle, and update such that the margin is greater than the cost. Additionally, we select the maximum bound1120 Bounding Constraint dist cost > margin BLEU Score Margin Constraint cost margin Model Score dist > B B Figure 2: RM update with margin and bounding constraints. The diagonal dotted line depicts cost–margin equilibrium. The vertical gray dotted line depicts the bound B. White arrows indicate updates triggered by constraint violations. Squares are data points in the k-best list not selected for update in this round. task Corpus Sentences Tokens En Zh/Ar Zh-En training 1.6M 44.4M 40.4M tune (MT06) 1664 48k 39k MT03 919 28k 24k MT05 1082 35k 33k Ar-En training 1M 23.7M 22.8M tune (MT06) 1797 55k 49k MT05 1056 36k 33k MT08 1360 51k 45k 4-gram LM 24M 600M – Table 1: Corpus statistics ing constraint violator, yw, shown as the upsidedown triangle, and update so the distance from y+ is no greater than B. 4 Experiments 4.1 Setup To evaluate the advantage of explicitly accounting for the spread of the data, we conducted several experiments on two Chinese-English translation test sets, using two different feature sets in each. For training we used the non-UN and non-HK Hansards portions of the NIST training corpora, which was segmented using the Stanford segmenter (Tseng et al., 2005). The data statistics are summarized in the top half of Table 1. The English data was lowercased, tokenized and aligned using GIZA++ (Och and Ney, 2003) to obtain bidirectional alignments, which were symmetrized using the grow-diag-final-and method (Koehn et al., 2003). We trained a 4-gram LM on the English side of the corpus with additional words from non-NYT and non-LAT, randomly selected portions of the Gigaword v4 corpus, using modified Kneser-Ney smoothing (Chen and Goodman, 1996). We used cdec (Dyer et al., 2010) as our hierarchical phrase-based decoder, and tuned the parameters of the system to optimize BLEU (Papineni et al., 2002) on the NIST MT06 corpus. We applied several competitive optimizers as baselines: hypergraph-based MERT (Kumar et al., 2009), k-best variants of MIRA (Crammer et al., 2006; Chiang et al., 2009), PRO (Hopkins and May, 2011), and RAMPION (Gimpel and Smith, 2012). The size of the k-best list was set to 500 for RAMPION, MIRA and RM, and 1500 for PRO, with both PRO and RAMPION utilizing k-best aggregation across iterations. RAMPION settings were as described in (Gimpel and Smith, 2012), and PRO settings as described in (Hopkins and May, 2011), with PRO requiring regularization tuning in order to be competitive with the other optimizers. MIRA and RM were run with 15 parallel learners using iterative parameter mixing (McDonald et al., 2010). All optimizers were implemented in cdec and use the same system configuration, thus the only independent variable is the optimizer itself. We set C to 0.01, and MaxIter to 100. We selected the bound step size D, based on performance on a held-out dev set, to be 0.01 for the basic feature set and 0.1 for the sparse feature set. The bound constraint B was set to 1.4 The approximate sentence-level BLEU cost ∆i is computed in a manner similar to (Chiang et al., 2009), namely, in the context of previous 1-best translations of the tuning set. All results are averaged over 3 runs. 
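Since the online learners above are run with iterative parameter mixing, a generic sketch of that scheme (McDonald et al., 2010) is given below: in each epoch every shard runs its online optimizer from the current mixed weights, and the resulting weight vectors are averaged. The function names are illustrative; `online_epoch` stands in for one pass of RM or MIRA over a shard.

```python
# Generic iterative parameter mixing; not the cdec implementation.

def mix(weight_vectors):
    """Uniform average of a list of sparse weight dicts."""
    mixed, n = {}, len(weight_vectors)
    for w in weight_vectors:
        for k, v in w.items():
            mixed[k] = mixed.get(k, 0.0) + v / n
    return mixed

def iterative_parameter_mixing(shards, online_epoch, epochs, w_init=None):
    w = dict(w_init or {})
    for _ in range(epochs):
        # In practice these shard passes run in parallel learners.
        local = [online_epoch(dict(w), shard) for shard in shards]
        w = mix(local)
    return w
```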
4.2 Feature Sets We experimented with a small (basic) feature set, and a large (sparse) feature set. For the small feature set, we use 14 features, including a language model, 5 translation model features, penalties for unknown words, the glue rule, and rule arity. For experiments with a larger feature set, we introduced additional lexical and non-lexical sparse Boolean features of the form commonly found in the literature (Chiang et al., 2009; Watan4We also conducted an investigation into the setting of the B parameter. We explored alternative values for B, as well as scaling it by the current candidate’s cost, and found that the optimizer is fairly insensitive to these changes, resulting in only minor differences in BLEU. 1121 Optimizer Zh Ar MIRA 35k 37k PRO 95k 115k RAMPION 22k 24k RM 30k 32k Active+Inactive 3.4M 4.9M Table 2: Active sparse feature templates abe et al., 2007; Simianer et al., 2012). Non-lexical features include structural distortion, which captures the dependence between reordering and the size of a filler, and rule shape, which bins grammar rules by their sequence of terminals and nonterminals (Chiang et al., 2008). Lexical features on rules include rule ID, which fires on a specific grammar rule. We also introduce context-dependent lexical features for the 300 most frequent aligned word pairs (f,e) in the training corpus, which fire on triples (f,e,f+1) and (f,e,f−1), capturing when we see f aligned to e, with f+1 and f−1 occurring to the right or left of f, respectively. All other words fall into the default ⟨unk⟩feature bin. In addition, we have insertion and deletion features for the 150 most frequently unaligned target and source words. These feature templates resulted in a total of 3.4 million possible features, of which only a fraction were active for the respective tuning set and optimizer, as shown in Table 2. 4.3 Results As can be seen from the results in Table 3, our RM method was the best performer in all ChineseEnglish tests according to all measures – up to 1.9 BLEU and 6.6 TER over MIRA – even though we only optimized for BLEU.5 Surprisingly, it seems that MIRA did not benefit as much from the sparse features as RM. The results are especially notable for the basic feature setting – up to 1.2 BLEU and 4.6 TER improvement over MERT – since MERT has been shown to be competitive with small numbers of features compared to high-dimensional optimizers such as MIRA (Chiang et al., 2008). For the tuning set, the decoder performance was consistently the lowest with RM, compared to the 5In the small feature set RAMPION yielded similar best BLEU scores, but worse TER. In preliminary experiments with a smaller trigram LM, our RM method consistently yielded the highest scores in all Chinese-English tests – up to 1.6 BLEU and 6.4 TER from MIRA, the second best performer. other optimizers. We believe this is due to the RM bounding constraint being more resistant to overfitting the training data, and thus allowing for improved generalization. Conversely, while PRO had the second lowest tuning scores, it seemed to display signs of underfitting in the basic and large feature settings. 5 Additional Experiments In order to explore the applicability of our approach to a wider range of languages, we also evaluated its performance on Arabic-English translation. All experimental details were the same as above, except those noted below. For training, we used the non-UN portion of the NIST training corpora, which was segmented using an HMM segmenter (Lee et al., 2003). 
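The context-dependent lexical features of §4.2 can be sketched as follows; the feature-string format, the handling of the `<unk>` back-off bin, and the helper names are illustrative assumptions rather than the system's actual templates.

```python
# Sketch of context-dependent lexical features: for an aligned pair (f, e),
# features fire on the source words immediately left and right of f, with
# infrequent pairs falling into a default <unk> bin.

def context_lexical_features(src_words, alignment, tgt_words, frequent_pairs):
    """alignment: list of (i, j) source-target index pairs;
    frequent_pairs: set of the most frequent aligned (f, e) pairs."""
    feats = {}
    for i, j in alignment:
        f, e = src_words[i], tgt_words[j]
        pair = (f, e) if (f, e) in frequent_pairs else ("<unk>", "<unk>")
        left = src_words[i - 1] if i > 0 else "<s>"
        right = src_words[i + 1] if i + 1 < len(src_words) else "</s>"
        for name in (f"ctx={pair[0]}|{pair[1]}|L={left}",
                     f"ctx={pair[0]}|{pair[1]}|R={right}"):
            feats[name] = feats.get(name, 0.0) + 1.0
    return feats
```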
Dataset statistics are given in the bottom part of Table 1. The sparse feature templates resulted here in a total of 4.9 million possible features, of which again only a fraction were active, as shown in Table 2. As can be seen in Table 4, in the smaller feature set, RM and MERT were the best performers, with the exception that on MT08, MIRA yielded somewhat better (+0.7) BLEU but a somewhat worse (-0.9) TER score than RM. On the large feature set, RM is again the best performer, except, perhaps, a tied BLEU score with MIRA on MT08, but with a clear 1.8 TER gain. In both Arabic-English feature sets, MIRA seems to take the second place, while RAMPION lags behind, unlike in Chinese-English (§4).6 Interestingly, RM achieved substantially higher BLEU precision scores in all tests for both language pairs. However, this was also usually coupled had a higher brevity penalty (BP) than MIRA, with the BP increasing slightly when moving to the sparse setting. 6 Discussion The trend of the results, summarized as RM gain over other optimizers averaged over all test sets, is presented in Table 5. RM shows clear advantage in both basic and sparse feature sets, over all other state-of-the-art optimizers. The RM gains are notably higher in the large feature set, which we take 6In our preliminary experiments with the smaller trigram LM, MERT did better on MT05 in the smaller feature set, and MIRA had a small advantage in two cases. RAMPION performed similarly to RM on the smaller feature set. RM’s loss was only up to 0.8 BLEU (0.7 TER) from MERT or MIRA, while its gains were up to 1.7 BLEU and 2.1 TER over MIRA. 1122 Small (basic) feature set Large (sparse) feature set Optimizer Tune MT03 MT05 Tune MT03 MT05 ↑BLEU ↑BLEU ↓TER ↑BLEU ↓TER ↑BLEU ↑BLEU ↓TER ↑BLEU ↓TER MERT 35.4 35.8 60.8 32.4 63.9 MIRA 35.5 35.8 61.1 32.1 64.6 36.6 35.9 60.6 32.1 64.1 PRO 34.1 36.0 60.2 31.7 63.4 35.7 34.8 56.1 31.4 59.1 RAMPION 35.1 36.5 58.6 33.0 61.3 36.7 36.9 57.7 33.3 60.6 RM 31.3 36.5 56.4 33.6 59.3 33.2 37.5 54.6 34.0 57.5 Table 3: Performance on Zh-En with basic (left) and sparse (right) feature sets on MT03 and MT05. Small (basic) feature set Large (sparse) feature set Optimizer Tune MT05 MT08 Tune MT05 MT08 ↑BLEU ↑BLEU ↓TER ↑BLEU ↓TER ↑BLEU ↑BLEU ↓TER ↑BLEU ↓TER MERT 43.8 53.3 40.2 41.0 50.7 MIRA 43.0 52.8 40.8 41.3 50.6 44.4 53.4 40.1 41.8 50.2 PRO 41.5 51.3 41.5 39.4 51.5 46.8 53.2 40.0 41.4 49.7 RAMPION 42.4 52.0 40.8 40.0 50.8 44.6 52.9 40.4 41.0 50.4 RM 38.5 53.3 39.8 40.6 49.7 43.0 55.3 37.5 41.8 48.4 Table 4: Performance on Ar-En with basic (left) and sparse (right) feature sets on MT05 and MT08. Small set Large set Optimizer BLEU TER BLEU TER MERT 0.4 2.6 MIRA 0.5 3.0 1.4 4.3 PRO 1.4 2.9 2.0 1.7 RAMPION 0.6 1.6 1.2 2.8 Table 5: RM gain over other optimizers averaged over all test sets. as an indication for the importance of bounding the spread. Spread analysis: For RM, the average spread of the projected data in the Chinese-English small feature set was 0.9±3.6 for all tuning iterations, and 0.7±2.9 for the iteration with the highest decoder performance. In comparison, the spread of the data for MIRA was 5.9±20.5 for the best iteration. In the sparse setting, RM had an average spread of 0.9±2.4 for the best iteration, while MIRA had a spread of 14.0±31.1. Similarly, on Arabic-English, RM had a spread of 0.7±2.4 in the small setting, and 0.82±1.4 in the sparse setting, while MIRA’s spread was 9.4±26.8 and 11.4±22.1, for the small and sparse settings, respectively. 
Notice that the average spread for RM stays about the same when moving to higher dimensions, with the variance decreasing in both cases. For MIRA, however, the average spread increases in both cases, with the variance being much higher than RM. For instance, observe that the spread of MIRA on Chinese grows from 5.9 to 14.0 in the sparse feature setting. While bounding the spread is useful in the low-dimensional setting (0.7-1.5 BLEU gain with RM over MIRA as shown in Table 3), accounting for the spread is even more crucial with sparse features, where MIRA gains only up to 0.1 BLEU, while RM gains 1 BLEU. These results support the claim that our imposed bound B indeed helps decrease the spread, and that, in turn, lower spread yields better generalization performance. Error Analysis: The inconclusive advantage of RM over MIRA (in BLEU vs. TER scores) on Arabic-English MT08 calls for a closer look. Therefore we conducted a coarse error analysis on 15 randomly selected sentences from MERT, RMM and MIRA, with basic and sparse feature settings for the latter two. This sample yielded 450 data points for analysis: output of the 5 conditions on 15 sentences scored in 6 violation categories. The categories were: function word drop, content word drop, syntactic error (with a reasonable meaning), semantic error (regardless of syntax), word order issues, and function word mistranslation and “hallucination”. The purpose of this analysis was to get a qualitative feel for the output of each model, and a better idea as to why we obtained performance improvements. RM no1123 ticeably had more word order and excess/wrong function word issues in the basic feature setting than any optimizer. However, RM seemed to benefit the most from the sparse features, as its bad word order rate dropped close to MIRA, and its excess/wrong function word rate dropped below that of MIRA with sparse features (MIRA’s rate actually doubled from its basic feature set). We conjecture both these issues will be ameliorated with syntactic features such as those in Chiang et al. (2008). This correlates with our observation that RM’s overall BLEU score is negatively impacted by the BP, as the BLEU precision scores are noticeably higher. K-best: RM is potentially more sensitive to the size and order of the k-best list. While MIRA is only concerned with the margin between y+ and y−, RM also accounts for the distance between y+ and yw. It might be the case that a larger k-best, or revisiting previous strategies for y+ and y−selection, such as bold updating, local updating (Liang et al., 2006b), or max-BLEU updating (Tillmann and Zhang, 2006) might have a greater impact. Also, we only explored several settings of B, and there remains a continuum of RM solutions that trade off between margin and spread in different ways. Active features: Perhaps contrary to expectation, we did not see evidence of a correlation between the number of active features and optimizer performance. RAMPION, with the fewest features, is the closest performer to RM in Chinese, while MIRA, with a greater number, is the closest on Arabic. We also notice that while PRO had the lowest BLEU scores in Chinese, it was competitive in Arabic with the highest number of features. 7 Conclusions and Future Work We have introduced RM, a novel online marginbased algorithm designed for optimizing highdimensional feature spaces, which introduces constraints into a large-margin optimizer that bound the spread of the projection of the data while maximizing the margin. 
The closed-form online update for our relative margin solution accounts for surrogate references and latent variables. Experimentation in statistical MT yielded significant improvements over several other stateof-the-art optimizers, especially in a highdimensional feature space (up to 2 BLEU and 4.3 TER on average). Overall, RM achieves the best or comparable performance according to two scoring methods in two language pairs, with two test sets each, in small and large feature settings. Moreover, across conditions, RM always yielded the best combined TER-BLEU score.7 These improvements are achieved using standard, relatively small tuning sets, contrasted with improvements involving sparse features obtained using much larger tuning sets, on the order of hundreds of thousands of sentences (Liang et al., 2006a; Tillmann and Zhang, 2006; Blunsom et al., 2008; Simianer et al., 2012). Since our approach is complementary to scaling up the tuning data, in future work we intend to combine these two methods. In future work we also intend to explore using additional sparse features that are known to be useful in translation, e.g. syntactic features explored by Chiang et al. (2008). Finally, although motivated by statistical machine translation, RM is a gradient-based method that can easily be applied to other problems. We plan to investigate its utility elsewhere in NLP (e.g. for parsing) as well as in other domains involving high-dimensional structured prediction. Acknowledgments We would like to thank Pannaga Shivaswamy for valuable discussions, and the anonymous reviewers for their comments. Vladimir Eidelman is supported by a National Defense Science and Engineering Graduate Fellowship. This work was also supported in part by the BOLT program of the Defense Advanced Research Projects Agency, Contract HR0011-12-C-0015. References Abishek Arun and Philipp Koehn. 2007. Online learning methods for discriminative training of phrase based statistical machine translation. In MT Summit XI. Peter L. Bartlett and Shahar Mendelson. 2003. Rademacher and gaussian complexities: risk bounds and structural results. J. Mach. Learn. Res., 3:463– 482, March. Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. A discriminative latent variable model for statistical machine translation. In Proceedings of ACL-08: HLT, Columbus, Ohio, June. 7We and other researchers often use 1 2(TER −BLEU) as a combined SMT quality metric. 1124 Nicol`o Cesa-Bianchi, Alex Conconi, and Claudio Gentile. 2005. A second-order perceptron algorithm. SIAM J. Comput., 34(3):640–668, March. Stanley F. Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 310–318. Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In Proceedings of NAACL. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Waikiki, Honolulu, Hawaii. David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 218–226. David Chiang. 2012. Hope and fear for discriminative training of statistical translation models. 
J. Machine Learning Research. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. J. Mach. Learn. Res., 7:551–585. Koby Crammer, Alex Kulesza, and Mark Dredze. 2009a. Adaptive regularization of weight vectors. In Advances in Neural Information Processing Systems 22, pages 414–422. Koby Crammer, Mehryar Mohri, and Fernando Pereira. 2009b. Gaussian margin machines. Journal of Machine Learning Research - Proceedings Track, 5:105–112. Koby Crammer, Mark Dredze, and Fernando Pereira. 2012. Confidence-weighted linear classification for text categorization. J. Mach. Learn. Res., 98888:1891–1926, June. Mark Dredze and Koby Crammer. 2008. Confidenceweighted linear classification. In In ICML 08: Proceedings of the 25th international conference on Machine learning, pages 264–271. ACM. Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of ACL System Demonstrations. Vladimir Eidelman. 2012. Optimization strategies for online large-margin learning in machine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation. George Foster and Roland Kuhn. 2009. Stabilizing minimum error rate training. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 242–249, Athens, Greece, March. Association for Computational Linguistics. Kevin Gimpel and Noah A. Smith. 2012. Structured ramp loss minimization for machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics. Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352–1362, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Thorsten Joachims. 1998. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Claire N´edellec and C´eline Rouveirol, editors, European Conference on Machine Learning, pages 137–142, Berlin. Springer. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, Stroudsburg, PA, USA. Shankar Kumar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient minimum error rate training and minimum bayes-risk decoding for translation hypergraphs and lattices. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 163–171. Young-Suk Lee, Kishore Papineni, Salim Roukos, Ossama Emam, and Hany Hassan. 2003. Language model based Arabic word segmentation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 399–406. Percy Liang, Alexandre Bouchard-Cˆot´e, Dan Klein, and Ben Taskar. 2006a. An end-to-end discriminative approach to machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 761–768. Percy Liang, Alexandre Bouchard-Cˆot´e, Dan Klein, and Ben Taskar. 2006b. 
An end-to-end discriminative approach to machine translation. In Proceedings of the 2006 International Conference on Computational Linguistics (COLING) - the Association for Computational Linguistics (ACL). David Mcallester and Joseph Keshet. 2011. Generalization bounds and consistency for latent structural probit and ramp loss. In J. Shawe-Taylor, 1125 R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2205–2212. Ryan McDonald, Keith Hall, and Gideon Mann. 2010. Distributed training strategies for the structured perceptron. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 456–464, Los Angeles, California. Franz Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. In Computational Linguistics, volume 29(21), pages 19–51. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Pannagadatta Shivaswamy and Tony Jebara. 2009a. Structured prediction with relative margin. In In International Conference on Machine Learning and Applications. Pannagadatta K Shivaswamy and Tony Jebara. 2009b. Relative margin machines. In In Advances in Neural Information Processing Systems 21. MIT Press. Patrick Simianer, Stefan Riezler, and Chris Dyer. 2012. Joint feature selection in distributed stochastic learning for large-scale discriminative training in smt. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Jeju Island, Korea, July. David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, Sydney, Australia, July. Association for Computational Linguistics. Ben Taskar, Simon Lacoste-Julien, and Michael I. Jordan. 2006. Structured prediction, dual extragradient and bregman projections. J. Mach. Learn. Res., 7:1627–1653, December. Christoph Tillmann and Tong Zhang. 2006. A discriminative global training algorithm for statistical MT. In Proceedings of the 2006 International Conference on Computational Linguistics (COLING) - the Association for Computational Linguistics (ACL). Huihsin Tseng, Pi-Chuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter. In Fourth SIGHAN Workshop on Chinese Language Processing. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proceedings of the twenty-first international conference on Machine learning, ICML ’04. Vladimir N. Vapnik. 1995. The nature of statistical learning theory. Springer-Verlag New York, Inc., New York, NY, USA. Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), Prague, Czech Republic, June. 
Association for Computational Linguistics. 1126
2013
110
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1127–1136, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Handling Ambiguities of Bilingual Predicate-Argument Structures for Statistical Machine Translation Feifei Zhai, Jiajun Zhang, Yu Zhou and Chengqing Zong National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China {ffzhai,jjzhang,yzhou,cqzong}@nlpr.ia.ac.cn Abstract Predicate-argument structure (PAS) has been demonstrated to be very effective in improving SMT performance. However, since a sourceside PAS might correspond to multiple different target-side PASs, there usually exist many PAS ambiguities during translation. In this paper, we group PAS ambiguities into two types: role ambiguity and gap ambiguity. Then we propose two novel methods to handle the two PAS ambiguities for SMT accordingly: 1) inside context integration; 2) a novel maximum entropy PAS disambiguation (MEPD) model. In this way, we incorporate rich context information of PAS for disambiguation. Then we integrate the two methods into a PASbased translation framework. Experiments show that our approach helps to achieve significant improvements on translation quality. 1 Introduction Predicate-argument structure (PAS) depicts the relationship between a predicate and its associated arguments, which indicates the skeleton structure of a sentence on semantic level. Basically, PAS agrees much better between two languages than syntax structure (Fung et al., 2006; Wu and Fung, 2009b). Considering that current syntaxbased translation models are always impaired by cross-lingual structure divergence (Eisner, 2003; Zhang et al., 2010), PAS is really a better representation of a sentence pair to model the bilingual structure mapping. However, since a source-side PAS might correspond to multiple different target-side PASs, there usually exist many PAS ambiguities during translation. For example, in Figure 1, (a) and (b) carry the same source-side PAS <[A0]1 [Pred(是)]2 [A1]3> for Chinese predicate “是”. However, in Figure 1(a), the corresponding target-side-like PAS is <[X1] [X2] [X3]>, while in Figure 1(b), the counterpart target-side-like PAS1 is <[X2] [X3] [X1]>. This is because the two PASs play different roles in their corresponding sentences. Actually, Figure 1(a) is an independent PAS, while Figure 1(b) is a modifier of the noun phrase “中国 和 俄罗斯”. We call this kind of PAS ambiguity role ambiguity. 中国 和 俄罗斯 两个 大国 是 [ A0 ]1 [ A1 ]3 [Pred]2 , being , should … two major countries [ X3 ] [X2] China and Russia [ X1 ] 应 … 防洪 首要 的 任务 是 [ A0 ]1 [ A1 ]3 [Pred]2 flood prevention is the primary mission [ X1 ] [ X2 ] [ X3 ] 奥运村 的 位置 对 运动员 是 最 好 的 [ A0 ]1 [ A1 ]3 [Pred]2 the location of the olympic village for athletes is the best [ X3 ] [X2] [ X1 ] (a) (c) (b) Figure 1. An example of ambiguous PASs. Meanwhile, Figure 1 also depicts another kind of PAS ambiguity. From Figure 1, we can see that (a) and (c) get the same source-side PAS and target-side-like PAS. However, they are different because in Figure 1(c), there is a gap string “对 运动员” between [A0] and [Pred]. Generally, the gap strings are due to the low recall of automatic semantic role labeling (SRL) or complex sentence structures. 
For example, in Figure 1(c), the gap string “对 运动员” is actually an argument “AM-PRP” of the PAS, but the SRL system has 1We use target-side-like PAS to refer to a list of general non-terminals in target language order, where a nonterminal aligns to a source argument. 1127 ignored it. We call this kind of PAS ambiguity gap ambiguity. During translation, these PAS ambiguities will greatly affect the PAS-based translation models. Therefore, in order to incorporate the bilingual PAS into machine translation effectively, we need to decide which target-side-like PAS should be chosen for a specific source-side PAS. We call this task PAS disambiguation. In this paper, we propose two novel methods to incorporate rich context information to handle PAS ambiguities. Towards the gap ambiguity, we adopt a method called inside context integration to extend PAS to IC-PAS. In terms of IC-PAS, the gap strings are combined effectively to deal with the gap ambiguities. As to the role ambiguity, we design a novel maximum entropy PAS disambiguation (MEPD) model to combine various context features, such as context words of PAS. For each ambiguous source-side PAS, we build a specific MEPD model to select appropriate target-side-like PAS for translation. We will detail the two methods in Section 3 and 4 respectively. Finally, we integrate the above two methods into a PAS-based translation framework (Zhai et al. 2012). Experiments show that the two PAS disambiguation methods significantly improve the baseline translation system. The main contribution of this work can be concluded as follows: 1) We define two kinds of PAS ambiguities: role ambiguity and gap ambiguity. To our best knowledge, we are the first to handle these PAS ambiguities for SMT. 2) Towards the two different ambiguities, we design two specific methods for PAS disambiguation: inside context integration and the novel MEPD model. 2 PAS-based Translation Framework PAS-based translation framework is to perform translation based on PAS transformation (Zhai et al., 2012). In the framework, a source-side PAS is first converted into target-side-like PASs by PAS transformation rules, and then perform translation based on the obtained target-side-like PASs. 2.1 PAS Transformation Rules PAS transformation rules (PASTR) are used to convert a source-side PAS into a target one. Formally, a PASTR is a triple <Pred, SP, TP>:  Pred means the predicate where the rule is extracted.  SP denotes the list of source elements in source language order.  TP refers to the target-side-like PAS, i.e., a list of general non-terminals in target language order. For example, Figure 2 shows the PASTR extracted from Figure 1(a). In this PASTR, Pred is Chinese verb “是”, SP is the source element list <[A0]1 [Pred]2 [A1]3>, and TP is the list of non-terminals <X1 X2 X3>. The same subscript in SP and TP means a one-to-one mapping between a source element and a target non-terminal. Here, we utilize the source element to refer to the predicate or argument of the source-side PAS. [X3] [X2] [A0]1 [Pred]2 [A1]3 [X1] source-side PAS(是) target-side-like PAS Figure 2. An example PASTR. 2.2 PAS Decoding The PAS decoding process is divided into 3 steps: (1) PAS acquisition: perform semantic role labeling (SRL) on the input sentences to achieve their PASs, i.e., source-side PASs; (2) Transformation: use the PASTR to match the source-side PAS i.e., the predicate Pred and the source element list SP. Then by the matching PASTRs, transform source-side PASs to targetside-like PASs. 
(3) Translation: in this step, the decoder first translates each source element respectively, and then a CKY-style decoding algorithm is adopted to combine the translation of each element and get the final translation of the PAS. 2.3 Sentence Decoding with the PAS-based translation framework Sometimes, the source sentence cannot be fully covered by the PAS, especially when there are several predicates. Thus to translate the whole sentence, Zhai et al. (2012) further designed an algorithm to decode the entire sentence. In the algorithm, they organized the space of translation candidates into a hypergraph. For the span covered by PAS (PAS span), a multiplebranch hyperedge is employed to connect it to the PAS’s elements. For the span not covered by PAS (non-PAS span), the decoder considers all the possible binary segmentations of it and utilizes binary hyperedges to link them. 1128 During translation, the decoder fills the spans with translation candidates in a bottom-up manner. For the PAS span, the PAS-based translation framework is adopted. Otherwise, the BTG system (Xiong et al., 2006) is used. When the span covers the whole sentence, we get the final translation result. Obviously, PAS ambiguities are not considered in this framework at all. The targetside-like PAS is selected only according to the language model and translation probabilities, without considering any context information of PAS. Consequently, it would be difficult for the decoder to distinguish the source-side PAS from different context. This harms the translation quality. Thus to overcome this problem, we design two novel methods to cope with the PAS ambiguities: inside-context integration and a maximum entropy PAS disambiguation (MEPD) model. They will be detailed in the next two sections. 3 Inside Context Integration In this section, we integrate the inside context of the PAS into PASTRs to do PAS disambiguation. Basically, a PAS consists of several elements (a predicate and several arguments), which are actually a series of continuous spans. For a specific PAS <E1,…, En>, such as the source-side PAS <[A0][Pred][A1]> in Figure 2, its controlled range is defined as: ( ) { ( ), [1, ]} i range PAS s E i n = ∀∈ where s(Ei) denotes the span of element Ei. Further, we define the closure range of a PAS. It refers to the shortest continuous span covered by the entire PAS: 0 ( ) ( ) _ min , max n j s E j s E closure range j j ∈ ∈   =     Here, E0 and En are the leftmost and rightmost element of the PAS respectively. The closure range is introduced here because adjacent source elements in a PAS are usually separated by gap strings in the sentence. We call these gap strings the inside context (IC) of the PAS, which satisfy: _ ( ) ( ( ) ( ) ) closure range PAS IC PAS range PAS = ⊕  The operator ⊕ takes a list of neighboring spans as input2, and returns their combined continuous span. As an example, towards the PAS “<[A0] [Pred][A1]>” (the one for Chinese predicate “是 (shi)”) in Figure 3, its controlled range is {[3,5],[8,8],[9,11]} and its closure range is [3,11]. The IC of the PAS is thus {[6,7]}. To consider the PAS’s IC during PAS transformation process, we incorporate its IC into the extracted PASTR. For each gap string in IC, we abstract it by the sequence of highest node categories (named as s-tag sequence). The s-tag sequence dominates the corresponding syntactic tree fragments in the parse tree. For example, in Figure 3, the s-tag sequence for span [6,8] is “PP VC”. 
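The span computations just defined, the controlled range, closure range, and inside context, can be sketched as below, assuming each PAS element is an inclusive (start, end) word span; the function names are illustrative. The example spans correspond to the PAS for the predicate "是(shi)" in Figure 3.

```python
# Sketch of controlled range, closure range, and inside context (IC) for a PAS
# whose elements are inclusive (start, end) word spans.

def controlled_range(element_spans):
    return list(element_spans)

def closure_range(element_spans):
    return (min(s for s, _ in element_spans), max(e for _, e in element_spans))

def inside_context(element_spans):
    """Gap spans between adjacent elements, i.e. the IC of the PAS."""
    spans = sorted(element_spans)
    gaps = []
    for (s1, e1), (s2, e2) in zip(spans, spans[1:]):
        if s2 > e1 + 1:
            gaps.append((e1 + 1, s2 - 1))
    return gaps

# Elements [3,5], [8,8], [9,11] as in Figure 3:
print(closure_range([(3, 5), (8, 8), (9, 11)]))   # (3, 11)
print(inside_context([(3, 5), (8, 8), (9, 11)]))  # [(6, 7)]
```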
We combine the s-tag sequences with the elements of the PAS in order; the resulting PAS is called an IC-PAS, as the left side of Figure 4(b) shows. Figure 3. The illustration of inside context (IC) on the parse tree of the sentence "他0 表示1 ,2 奥运村3 的4 位置5 对6 运动员7 是8 最9 好10 的11 。"; the subscript of each word refers to its position in the sentence, and the elements of the PAS for the predicate "是 (shi)" are [A0] (span [3,5]), [Pred] (span [8,8]) and [A1] (span [9,11]), separated by the gap string "对 运动员". Differently, Zhai et al. (2012) attached the IC to its neighboring elements based on parse trees. For example, in Figure 3, they would attach the gap string "对(dui) 运动员(yun-dong-yuan)" to the PAS element "Pred", so that the span of "Pred" becomes [6,8]. Consequently, the span [6,8] is translated as a whole source element in the decoder. This results in a bad translation because the gap string "对(dui) 运动员(yun-dong-yuan)" and the predicate "是(shi)" should be translated separately, as Figure 4(a) shows. Therefore, the attachment decision in (Zhai et al., 2012) is sometimes unreasonable, and the IC cannot be used for PAS disambiguation at all. In contrast, our method of inside context integration is much more flexible and beneficial for PAS disambiguation. Figure 4. Example of IC-PASTR: (a) the aligned span of each element of the PAS in Figure 3, where "奥运村 的 位置" [A0]1, "对 运动员" [PP]2, "是" [Pred]3 and "最 好 的" [A1]4 align to "[the location of the olympic village]1", "[for athletes]2", "[is]3" and "[the best]4", respectively; (b) the IC-PASTR extracted from (a), which maps the source-side PAS(是) <[A0]1 [PP]2 [Pred]3 [A1]4> to a target-side-like PAS of four non-terminals. Using the IC-PASs, we look for the aligned target span of each element of the IC-PAS. We require that every element and its corresponding target span be consistent with the word alignment; otherwise, we discard the IC-PAS. Afterwards, we can easily extract a rule for PAS transformation, which we call an IC-PASTR. As an example, Figure 4(b) is the IC-PASTR extracted from Figure 4(a). Note that we only use the source-side PAS and the word alignment for IC-PASTR extraction. By contrast, Zhai et al. (2012) utilized the result of bilingual SRL (Zhuang and Zong, 2010b). Generally, bilingual SRL could give a better alignment between bilingual elements. However, bilingual SRL usually achieves a rather low recall on PASs: it yields about 226,968 PAS entries in our training set, compared with 882,702 when the monolingual SRL system is used. Thus, to get a high recall for PASs, we only utilize the word alignment instead of capturing the relation between bilingual elements. In addition, to guarantee the accuracy of IC-PASTRs, we only retain rules with more than 5 occurrences.
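The alignment-consistency requirement used above is, in essence, the standard phrase-extraction constraint; the following is a small sketch of such a check (Python; the function and its exact definition are our assumption of what "consistent with word alignment" means here, not code from the paper).

```python
def consistent(src_span, tgt_span, alignment):
    """Check extraction consistency of (src_span, tgt_span).

    src_span, tgt_span: inclusive (start, end) index pairs.
    alignment: set of (src_idx, tgt_idx) alignment links.
    The pair is consistent if no link crosses the boundary of the box
    and at least one link falls inside it.
    """
    s_start, s_end = src_span
    t_start, t_end = tgt_span
    inside = False
    for s, t in alignment:
        s_in = s_start <= s <= s_end
        t_in = t_start <= t <= t_end
        if s_in != t_in:          # a link leaves the box on one side only
            return False
        if s_in and t_in:
            inside = True
    return inside

# Toy example: three alignment links between a source and target sentence.
links = {(0, 0), (1, 2), (2, 3)}
print(consistent((1, 2), (2, 3), links))  # True
print(consistent((0, 0), (0, 2), links))  # False: target 2 aligns outside the source span
```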
4 Maximum Entropy PAS Disambiguation (MEPD) Model In order to handle role ambiguities, in this section we utilize a maximum entropy model that incorporates context information for PAS disambiguation. The disambiguation problem can be considered a multi-class classification task: for a source-side PAS, every corresponding target-side-like PAS can be considered a label. For example, in Figure 1, for the source-side PAS "[A0]1 [Pred]2 [A1]3", the target-side-like PAS "[X1] [X2] [X3]" in Figure 1(a) is one label and "[X2] [X3] [X1]" in Figure 1(b) is another label of this classification problem. The maximum entropy model is the classical way to handle this problem: P(tp | sp, c(sp), c(tp)) = exp(Σ_i θ_i h_i(sp, tp, c(sp), c(tp))) / Σ_{tp′} exp(Σ_i θ_i h_i(sp, tp′, c(sp), c(tp′))), where sp and tp refer to the source-side PAS (not including the predicate) and the target-side-like PAS, respectively, c(sp) and c(tp) denote the surrounding context of sp and tp, h_i is a binary feature function, and θ_i is the weight of h_i. We train a maximum entropy classifier for each sp via the off-the-shelf MaxEnt toolkit (http://homepages.inf.ed.ac.uk/lzhang10/maxent_toolkit.html). Note that, to avoid sparseness, sp does not include the predicate of the PAS; instead, the predicate serves as a feature of the MEPD model. As an example, for the rule illustrated in Figure 4(b), we build a MEPD model for its source element list sp = <[A0] [PP] [Pred] [A1]> and integrate the predicate "是(shi)" into the MEPD model as a feature. In detail, we design the following features for each pair <sp, tp>. Lexical features: the words immediately to the left and right of sp, represented as w-1 and w+1, and the head word of each argument, named hw(Ei). For example, Figure 3 shows the context of the IC-PASTR in Figure 4(b), and the extracted lexical features of this instance are w-1 = ",", w+1 = "。", hw([A0]1) = 位置(wei-zhi), and hw([A1]4) = 好(hao). POS features: the POS tags of the lexical features, p-1, p+1 and phw(Ei), respectively. The corresponding POS features of Figure 4(b) are p-1 = PU, p+1 = PU, phw([A0]1) = NN, and phw([A1]4) = VA. Predicate feature: the pair of the source predicate and its corresponding target predicate. For example, in Figure 4(b), the source and target predicates are "是(shi)" and "is", respectively; the predicate feature is thus "PredF=是(shi)+is". The target predicate is determined by t-pred = argmax_{tj ∈ t_range(PAS)} p(tj | s-pred), where s-pred is the source predicate and t-pred is the corresponding target predicate; t_range(PAS) refers to the target range covering all the words that are reachable from the PAS via the word alignment, and tj refers to the j-th word in t_range(PAS). The lexical translation probabilities used here are from the toolkit in Moses (Koehn et al., 2007). Syntax features: st(Ei), i.e., the highest syntax tag of each argument, and fst(PAS), the lowest father node of sp in the parse tree. For the rule shown in Figure 4(b), the syntax features are st([A0]1) = NP, st([A1]4) = CP, and fst(PAS) = IP. Using these features, we train the MEPD models. We set the Gaussian prior to 1.0 and perform 100 iterations of the L-BFGS algorithm for each MEPD model. In total, we build 160 and 215 different MEPD classifiers for the PASTRs and IC-PASTRs, respectively. Note that since the training procedure of a maximum entropy classifier is very fast, it does not take much time to train these classifiers.
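To illustrate how such MEPD training instances could be assembled, here is a small sketch (Python). The feature names mirror the description above, but the data structures and helper function are hypothetical, not the authors' implementation.

```python
def mepd_features(instance):
    """Turn one <sp, tp> training instance into binary feature strings.

    `instance` is assumed to provide the left/right context words and POS tags,
    per-argument head words, head POS tags and syntax tags, the predicate pair,
    and the lowest parse node covering sp.
    """
    feats = []
    # Lexical features: surrounding words and argument head words.
    feats.append("w-1=" + instance["left_word"])
    feats.append("w+1=" + instance["right_word"])
    for arg, head in instance["head_words"].items():
        feats.append("hw(%s)=%s" % (arg, head))
    # POS features: tags of the same positions.
    feats.append("p-1=" + instance["left_pos"])
    feats.append("p+1=" + instance["right_pos"])
    for arg, pos in instance["head_pos"].items():
        feats.append("phw(%s)=%s" % (arg, pos))
    # Predicate feature: source predicate paired with its target predicate.
    feats.append("PredF=%s+%s" % (instance["src_pred"], instance["tgt_pred"]))
    # Syntax features: highest tag of each argument and lowest father node of sp.
    for arg, tag in instance["syn_tags"].items():
        feats.append("st(%s)=%s" % (arg, tag))
    feats.append("fst=" + instance["father_node"])
    return feats

example = {
    "left_word": ",", "right_word": "。", "left_pos": "PU", "right_pos": "PU",
    "head_words": {"A0": "位置", "A1": "好"}, "head_pos": {"A0": "NN", "A1": "VA"},
    "src_pred": "是", "tgt_pred": "is",
    "syn_tags": {"A0": "NP", "A1": "CP"}, "father_node": "IP",
}
print(mepd_features(example))   # the feature strings for the Figure 4(b) instance
```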
5 Integrating into the PAS-based Translation Framework In this section, we integrate our method of PAS disambiguation into the PAS-based translation framework when translating each test sentence. For inside context integration, since the format of an IC-PASTR is the same as that of a PASTR (the only difference is that IC-PASTRs contain additional syntactic labels), we can directly substitute IC-PASTRs for PASTRs when building a PAS-based translation system. We use "IC-PASTR" to denote this system. In addition, since our method of rule extraction is different from that of Zhai et al. (2012), we also use PASTRs to construct a translation system as the baseline system, which we call "PASTR". On the basis of PASTR and IC-PASTR, we further integrate our MEPD model into translation. Specifically, we take the score of the MEPD model as another informative feature that helps the decoder distinguish good target-side-like PASs from bad ones. The weight of the MEPD feature is tuned by MERT (Och, 2003) together with the other translation features, such as the language model. 6 Related Work The method of PAS disambiguation for SMT is related to previous work on context-dependent translation. Carpuat and Wu (2007a, 2007b) and Chan et al. (2007) integrated word sense disambiguation (WSD) and phrase sense disambiguation (PSD) into SMT systems. They combine rich context information to disambiguate words or phrases, and achieve improved translation performance. Differently, He et al. (2008), Liu et al. (2008) and Cui et al. (2010) designed maximum entropy (ME) classifiers to perform better rule selection for the hierarchical phrase-based model and the tree-to-string model, respectively. By incorporating rich context information as features, they chose better rules for translation and yielded stable improvements in translation quality. Our work differs from the above work in the following three aspects: 1) we focus on the problem of disambiguating PASs; 2) we define two kinds of PAS ambiguities: role ambiguity and gap ambiguity; 3) for the two different ambiguities, we design two specific methods for PAS disambiguation: inside context integration and the novel MEPD model. In addition, Xiong et al. (2012) proposed an argument reordering model to predict the relative position between predicates and arguments. They also combine context information in their model, but they only focus on the relation between the predicate and a specific argument, rather than the entire PAS. Different from their work, we incorporate context information to perform PAS disambiguation based on the entire PAS, which is very beneficial for global reordering during translation (Zhai et al., 2012). 7 Experiment 7.1 Experimental Setup We perform Chinese-to-English translation to demonstrate the effectiveness of our PAS disambiguation method. The training data contains about 260K sentence pairs, extracted from the LDC corpus (catalog numbers LDC2000T50, LDC2002E18, LDC2003E07, LDC2004T07, LDC2005T06, LDC2002L27, LDC2005T10 and LDC2005T34). To get accurate SRL results, we ensure that the length of each sentence in the training data is between 10 and 30 words. We run GIZA++ and then employ the grow-diag-final-and (gdfa) strategy to produce symmetric word alignments. The development set and test set come from the NIST evaluation test data (from 2003 to 2005). As with the training set, we only retain sentences whose lengths are between 10 and 30 words. Finally, the development set includes 595 sentences from NIST MT03 and the test set contains 1,786 sentences from NIST MT04 and MT05.
We train a 5-gram language model on the Xinhua portion of the English Gigaword corpus and the target part of the training data. Translation quality is evaluated by case-insensitive BLEU-4 with the shortest length penalty. Statistical significance is tested by the re-sampling approach (Koehn, 2004). We perform SRL on the source part of the training set, development set and test set with the Chinese SRL system used in (Zhuang and Zong, 2010b). To relieve the negative effect of SRL errors, we obtain multiple SRL results by providing the SRL system with the 3-best parse trees of the Berkeley parser (Petrov and Klein, 2007) and the 1-best parse trees of the Bikel parser (Bikel, 2004) and the Stanford parser (Klein and Manning, 2003). Therefore, we finally get 5 SRL results for each sentence. For the training set, we use these SRL results to perform rule extraction separately and combine the obtained rules into a combined rule set. We discard rules with fewer than 5 appearances. Using this set, we can train our MEPD model directly. As to translation, we match the 5 SRL results against the transformation rules respectively, and then apply the resulting target-side-like PASs for decoding. As mentioned in Section 2.3, we use the state-of-the-art BTG system to translate the non-PAS spans.
source-side PAS | counts | number of classes
[A0] [Pred(是)] [A1] | 245 | 6
[A0] [Pred(说)] [A1] | 148 | 6
[A0] [AM-ADV] [Pred(是)] [A1] | 68 | 20
[A0] [Pred(表示)] [A1] | 66 | 6
[A0] [Pred(有)] [A1] | 42 | 6
[A0] [Pred(认为)] [A1] | 32 | 4
[A0] [AM-ADV] [Pred(有)] [A1] | 32 | 19
[A0] [Pred(指出)] [A1] | 29 | 4
[AM-ADV] [Pred(有)] [A1] | 26 | 6
[A2] [Pred(为)] [A1] | 16 | 5
Table 1. The top 10 frequent source-side PASs in the dev and test set.
7.2 Ambiguities in Source-side PASs We first give Table 1 to show some examples of role ambiguity. In the table, for instance, the second line denotes that the source-side PAS "[A0] [Pred(说)] [A1]" appears 148 times in the development and test set altogether, and that it corresponds to 6 different target-side-like PASs in the training set. As we can see from Table 1, all of the top 10 PASs correspond to several different target-side-like PASs. Moreover, according to our statistics, among all PASs appearing in the development set and test set, 56.7% carry gap strings. These statistics demonstrate the importance of handling role ambiguity and gap ambiguity in the PAS-based translation framework. Therefore, we believe that our PAS disambiguation method is helpful for translation.
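The "number of classes" column in Table 1 is simply the number of distinct target-side-like PASs that each source-side PAS maps to in the extracted rule set; a small sketch of how such counts could be gathered (Python; the rule-table format shown is an assumption, not the authors' actual file format) follows.

```python
from collections import defaultdict

def ambiguity_classes(rules):
    """rules: iterable of (source_side_pas, target_side_like_pas) string pairs.
    Returns, for each source-side PAS, the number of distinct target sides."""
    targets = defaultdict(set)
    for sp, tp in rules:
        targets[sp].add(tp)
    return {sp: len(tps) for sp, tps in targets.items()}

rules = [
    ("[A0] [Pred(是)] [A1]", "[X1] [X2] [X3]"),
    ("[A0] [Pred(是)] [A1]", "[X2] [X3] [X1]"),
    ("[A0] [Pred(说)] [A1]", "[X1] [X2] [X3]"),
]
print(ambiguity_classes(rules))
# {'[A0] [Pred(是)] [A1]': 2, '[A0] [Pred(说)] [A1]': 1}
```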
7.3 Translation Result We compare the translation results using PASTR, IC-PASTR and our MEPD model in this section. The final translation results are shown in Table 2. As we can see, after employing PAS for translation, all systems significantly outperform the baseline BTG system. This comparison verifies the conclusion of (Zhai et al., 2012) and thus also demonstrates the effectiveness of PAS.
MT system | Test set (BLEU) | 1-gram | 2-gram | 3-gram | 4-gram
BTG | 32.75 | 74.39 | 41.91 | 24.75 | 14.91
PASTR | 33.24* | 75.28 | 42.62 | 25.18 | 15.10
PASTR+MEPD | 33.78* | 75.32 | 43.08 | 25.75 | 15.58
IC-PASTR | 33.95*# | 75.62 | 43.36 | 25.92 | 15.58
IC-PASTR+MEPD | 34.19*# | 75.66 | 43.40 | 26.15 | 15.92
Table 2. Results of the baseline system and the MT systems using our PAS-based disambiguation method; the columns give the test-set BLEU and the 1- to 4-gram precisions. The "*" and "#" denote that the result is significantly better than BTG and PASTR, respectively (p<0.01).
Specifically, after integrating the inside context information of the PAS into the transformation, system IC-PASTR significantly outperforms system PASTR by 0.71 BLEU points. Moreover, after we import the MEPD model into system PASTR, we get a significant improvement over PASTR (by 0.54 BLEU points). These comparisons indicate that both the inside context integration and our MEPD model help the decoder choose better target-side-like PASs for translation. On the basis of IC-PASTR, we further add our MEPD model into translation and obtain system IC-PASTR+MEPD. We can see that this system achieves a further remarkable improvement over system PASTR (0.95 BLEU points). However, from Table 2, we find that system IC-PASTR+MEPD only outperforms system IC-PASTR slightly (by 0.24 BLEU points). The result seems to show that our MEPD model is not as useful after IC-PASTR is used; we will explore the reason in Section 7.5. 7.4 Effectiveness of Inside Context Integration The method of inside context integration combines the inside context (gap strings) into the PAS for translation, i.e., it extends the PASTR to the IC-PASTR. In order to demonstrate the effectiveness of inside context integration, we first give Table 3, which shows statistics on the matching PASs. The statistics are computed on the combination of the development set and the test set.
Transformation rules | Non-gap PAS | Gap PAS | Total matching PAS
PASTR | 1702 | 1539 | 3241
IC-PASTR | 1546 | 832 | 2378
Table 3. Statistics on the matching PASs.
In Table 3, for example, the line for PASTR means that if we use PASTRs on the combined set, 3241 PASs (column "Total") can match PASTRs in total. Among these matching PASs, 1539 (column "Gap PAS") carry gap strings, while 1702 (column "Non-gap PAS") do not. Consequently, since PASTR does not consider the inside context during translation, the gap PASs, which account for 47% (1539/3241) of all matching PASs, might be handled inappropriately in the PAS-based translation framework. Therefore, integrating the inside context into PASTRs, i.e., using the proposed IC-PASTRs, should be helpful for translation. The translation results shown in Table 2 also support this conclusion. From Table 3, we can also find that the number of matching PASs decreases after using IC-PASTR. This is because an IC-PASTR is more specific than a PASTR: for a PAS with a specific inside context (gap strings), even if a matching PASTR is available, a matching IC-PASTR might not be. This indicates that, compared with PASTR, IC-PASTR is more capable of distinguishing different PASs. Based on this advantage, although the number of matching PASs decreases, IC-PASTR still improves the translation system using PASTR significantly. Of course, we believe that it is also possible to integrate the inside context without decreasing the number of matching PASs, and we plan this as future work. We further give a translation example in Figure 5 to illustrate the effectiveness of our inside context integration method. Figure 5. Translation examples to verify the effectiveness of inside context: (a) the reference, (b) the translation result using PASTR, in which the long preposition phrase is wrongly positioned ("[a good sign] this is [for economic recovery and the restoration of investors ' confidence]"), and (c) the translation result using IC-PASTR; the source PAS(是) here has the gap string "对 经济 复苏 、 尤其是 恢复 投资 信心" ("for economic recovery , especially of investment confidence").
In the example, the automatic SRL system ignores the long preposition phrase "对 经济复苏 、尤其是 恢复 投资信心" when building the PAS. Thus, the system using PASTRs can only attach the long phrase to the predicate "是" according to the parse tree and, meanwhile, makes use of a transformation rule that maps the source-side PAS(是) <[A0]1 [Pred]2 [A1]3> to a reordered target-side-like PAS. In this way, the translation result is very bad, as Figure 5(b) shows: the long preposition phrase is wrongly positioned in the translation. In contrast, after inside context integration, we match the inside context during PAS transformation. As Figure 5(c) shows, the inside context helps to select a correct transformation rule, which maps the IC-PAS <[A0]1 [PP]2 [Pred]3 [A1]4> to its target-side-like PAS, and finally yields a good translation result. 7.5 Effectiveness of the MEPD Model The MEPD model incorporates various context features to select better target-side-like PASs for translation. On the basis of PASTR and IC-PASTR, we build 160 and 215 different MEPD classifiers, respectively, for the frequent source-side PASs. In Table 2, we found that our MEPD model improves system IC-PASTR only slightly. We conjecture that this phenomenon is due to two possible reasons. On the one hand, many PAS ambiguities might sometimes be resolved by both the inside context and the MEPD model; therefore, the improvement is not as significant when we combine the two methods. On the other hand, as Table 3 shows, the number of matching PASs decreases after using IC-PASTR; since the MEPD model works on PASs, its effectiveness also weakens to some extent. Future work will explore this phenomenon more thoroughly.
Ref: ... , [海牙]A0 [是]Pred [其 最后 一站]A1 。 → ... [the hague] [is] [the last leg] .
PASTR+MEPD: ... , [海牙] [是] [其 最后 一站] 。 → ... [the hague] [is] [his last stop] .
PASTR: ... , [海牙]A0 [是]Pred [其 最后 一站]A1 。 → ... [is] [his last leg of] [the hague] .
Figure 6. Translation examples to demonstrate the effectiveness of our MEPD model.
Now, we give Figure 6 to demonstrate the effectiveness of our MEPD model. From the figure, we can see that the system using PASTRs selects an inappropriate transformation rule, which maps the source-side PAS(是) <[A0]1 [Pred]2 [A1]3> to the target order <X2 X3 X1>. This rule wrongly moves the subject "海牙 (Hague)" to the end of the translation. We do not give the translation result of the BTG system here because it makes the same mistake. Conversely, by considering the context information, the PASTR+MEPD system chooses a correct rule, which maps <[A0]1 [Pred]2 [A1]3> to the monotone target order <X1 X2 X3>. As we can see, this rule keeps the SVO structure unchanged and yields the correct translation. 8 Conclusion and Future Work In this paper, we focus on the problem of ambiguities of PASs. We first identify two kinds of ambiguity: gap ambiguity and role ambiguity. Accordingly, we design two novel methods for efficient PAS disambiguation: inside context integration and a novel MEPD model. For inside context integration, we abstract the inside context and combine it into the PASTRs for PAS transformation. For the MEPD model, we design a maximum entropy model for each ambiguous source-side PAS. The two methods successfully incorporate rich context information into the translation process. Experiments show that our PAS disambiguation methods help to improve the translation performance significantly.
In the next step, we will conduct experiments on other language pairs to demonstrate the effectiveness of our PAS disambiguation method. In addition, we also will try to explore more useful and representative features for our MEPD model. Acknowledgments The research work has been funded by the HiTech Research and Development Program (“863” Program) of China under Grant No. 2011AA01A207, 2012AA011101, and 2012AA011102 and also supported by the Key Project of Knowledge Innovation Program of Chinese Academy of Sciences under Grant No.KGZD-EW-501. We thank the anonymous reviewers for their valuable comments and suggestions. References Wilker Aziz, Miguel Rios, and Lucia Specia. (2011). Shallow semantic trees for smt. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 316–322, Edinburgh, Scotland, July. Daniel Bikel. (2004). Intricacies of Collins parsing model. Computational Linguistics, 30(4):480-511. David Chiang, (2007). Hierarchical phrase-based translation. Computational Linguistics, 33 (2):201– 228. Marine Carpuat and Dekai Wu. 2007a. How phrasesense disambiguation outperforms word sense disambiguation for statistical machine translation. In 11th Conference on Theoretical and Methodological Issues in Machine Translation, pages 43–52. Marine Carpuat and Dekai Wu. 2007b. Improving statistical machine translation using word sense disambiguation. In Proceedings of EMNLP-CoNLL 2007, pages 61–72. Yee Seng Chan, Hwee Tou Ng, and David Chiang. 2007. Word sense disambiguation improves statistical machine translation. In Proc. ACL 2007, pages 33–40. Lei Cui, Dongdong Zhang, Mu Li, Ming Zhou and Tiejun Zhao. A Joint Rule Selection Model for Hierarchical Phrase-Based Translation. In Proc. of ACL 2010. 1134 Jason Eisner. (2003). Learning non-isomorphic tree mappings for machine translation. In Proc. of ACL 2003. Pascale Fung, Wu Zhaojun, Yang Yongsheng, and Dekai Wu. (2006). Automatic learning of chinese english semantic structure mapping. In IEEE/ACL 2006 Workshop on Spoken Language Technology (SLT 2006), Aruba, December. Pascale Fung, Zhaojun Wu, Yongsheng Yang and Dekai Wu. (2007). Learning bilingual semantic frames: shallow semantic sarsing vs. semantic sole projection. In Proceedings of the 11th Conference on Theoretical and Methodological Issues in Machine Translation, pages 75-84. Qin Gao and Stephan Vogel. (2011). Utilizing targetside semantic role labels to assist hierarchical phrase-based machine translation. In Proceedings of Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 107–115, Portland, Oregon, USA, June 2011. Association for Computational Linguistics Zhongjun He, Qun Liu, and Shouxun Lin. 2008. Improving statistical machine translation using lexicalized rule selection. In Proc. of Coling 2008, pages 321–328. Franz Josef Och. (2003). Minimum error rate training in statistical machine translation. In Proc. of ACL 2003, pages 160–167. Franz Josef Och and Hermann Ney. (2004). The alignment template approach to statistical machine translation. Computational Linguistics, 30:417–449. Dan Klein and Christopher D. Manning. (2003). Accurate unlexicalized parsing. In Proc. of ACL-2003, pages 423-430. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. (2003). Statistical phrase-based translation. In Proceedings of NAACL 2003, pages 58–54, Edmonton, Canada, May-June. Philipp Koehn. (2004). Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. 
P Koehn, H Hoang, A Birch, C Callison-Burch, M Federico, N Bertoldi, B Cowan, W Shen, C Moran and R Zens, (2007). Moses: Open source toolkit for statistical machine translation. In Proc. of ACL 2007. pages 177–180, Prague, Czech Republic, June. Association for Computational Linguistics. Mamoru Komachi and Yuji Matsumoto. (2006). Phrase reordering for statistical machine translation based on predicate-argument structure. In Proceedings of the International Workshop on Spoken Language Translation: Evaluation Campaign on Spoken Language Translation, pages 77–82. Ding Liu and Daniel Gildea. (2008). Improved treeto-string transducer for machine Translation. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 62–69, Columbus, Ohio, USA, June 2008. Ding Liu and Daniel Gildea. (2010). Semantic role features for machine translation. In Proc. of Coling 2010, pages 716–724, Beijing, China, August. Qun Liu, Zhongjun He, Yang Liu, and Shouxun Lin. Maximum Entropy based Rule Selection Model for Syntax-based Statistical Machine Translation. In Proc. of EMNLP 2008. Yang Liu, Qun Liu and Shouxun Lin. (2006). Tree-tostring alignment template for statistical machine translation. In Proc. of ACL-COLING 2006. Daniel Marcu, Wei Wang, Abdessamad Echihabi and Kevin Knight. (2006). SPMT: Statistical machine translation with syntactified target language phrases. In Proc. of EMNLP 2006, pages 44-52. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. (2002). Bleu: a method for automatic evaluation of machine translation. In Proc. ACL 2002, pages 311–318, Philadelphia, Pennsylvania, USA, July. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. (2006). Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433– 440, Sydney, Australia, July. Association for Computational Linguistics. Andreas Stolcke. (2002). Srilm – an extensible language modelling toolkit. In Proceedings of the 7th International Conference on Spoken Language Processing, pages 901–904, Denver, Colorado, USA, September. Dekai Wu and Pascale Fung. (2009a). Can semantic role labelling improve smt. In Proceedings of the 13th Annual Conference of the EAMT, pages 218– 225, Barcelona, May. Dekai Wu and Pascale Fung. (2009b). Semantic roles for smt: A hybrid two-pass model. In Proc. NAACL 2009, pages 13–16, Boulder, Colorado, June. ShuminWu and Martha Palmer. (2011). Semantic mapping using automatic word alignment and semantic role labelling. In Proceedings of Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 21–30, Portland, Oregon, USA, June 2011. Xianchao Wu, Katsuhito Sudoh, Kevin Duh, Hajime Tsukada, and Masaaki Nagata. (2011). Extracting preordering rules from predicate-argument structures. In Proc. IJCNLP 2011, pages 29–37, Chiang Mai, Thailand, November. 1135 Deyi Xiong, Qun Liu, and Shouxun Lin. (2006). Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 521–528, Sydney, Australia, July. Deyi Xiong, Min Zhang, and Haizhou Li. (2012). Modelling the translation of predicate-argument structure for smt. In Proc. of ACL 2012, pages 902–911, Jeju, Republic of Korea, 8-14 July 2012. Nianwen Xue. (2008). 
Labelling chinese predicates with semantic roles. Computational Linguistics, 34(2): 225-255. Feifei Zhai, Jiajun Zhang, Yu Zhou and Chengqing Zong. Machine Translation by Modeling Predicate- Argument Structure Transformation. In Proc. of COLING 2012. Hui Zhang, Min Zhang, Haizhou Li and Eng Siong Chng. (2010). Non-isomorphic Forest Pair Translation. In Proceedings of EMNLP 2010, pages 440450, Massachusetts, USA, 9-11 October 2010. Tao Zhuang, and Chengqing Zong. (2010a). A minimum error weighting combination strategy for chinese semantic role labelling. In Proceedings of COLING-2010, pages 1362-1370. Tao Zhuang and Chengqing Zong. (2010b). Joint inference for bilingual semantic role labelling. In Proceedings of EMNLP 2010, pages 304–314, Massachusetts, USA, 9-11 October 2010. 1136
2013
111
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1137–1147, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Reconstructing an Indo-European Family Tree from Non-native English texts Ryo Nagata1,2 Edward Whittaker3 1Konan University / Kobe, Japan 2LIMSI-CNRS / Orsay, France 3Inferret Limited / Northampton, England [email protected], [email protected] Abstract Mother tongue interference is the phenomenon where linguistic systems of a mother tongue are transferred to another language. Although there has been plenty of work on mother tongue interference, very little is known about how strongly it is transferred to another language and about what relation there is across mother tongues. To address these questions, this paper explores and visualizes mother tongue interference preserved in English texts written by Indo-European language speakers. This paper further explores linguistic features that explain why certain relations are preserved in English writing, and which contribute to related tasks such as native language identification. 1 Introduction Transfer of linguistic systems of a mother tongue to another language, namely mother tongue interference, is often observable in the writing of nonnative speakers. The reader may be able to determine the mother tongue of the writer of the following sentence from the underlined article error: The alien wouldn’t use my spaceship but the hers. The answer would probably be French or Spanish; the definite article is allowed to modify possessive pronouns in these languages, and the usage is sometimes negatively transferred to English writing. Researchers such as Swan and Smith (2001), Aarts and Granger (1998), DavidsenNielsen and Harder (2001), and Altenberg and Tapper (1998) work on mother tongue interference to reveal overused/underused words, part of speech (POS), or grammatical items. In contrast, very little is known about how strongly mother tongue interference is transferred to another language and about what relation there is across mother tongues. At one extreme, one could argue that it is so strongly transferred to texts in another language that the linguistic relations between mother tongues are perfectly preserved in the texts. At the other extreme, one can counter it, arguing that other features such as non-nativeness are more influential than mother tongue interference. One possible reason for this is that a large part of the distinctive language systems of a mother tongue may be eliminated when transferred to another language from a speaker’s mother tongue. For example, Slavic languages have a rich inflectional case system (e.g., Czech has seven inflectional cases) whereas French does not. However, the difference in the richness cannot be transferred into English because English has almost no inflectional case system. Thus, one cannot determine the mother tongue of a given nonnative text from the inflectional case. A similar argument can be made about some parts of gender, tense, and aspect systems. Besides, Wong and Dras (2009) show that there are no significant differences, between mother tongues, in the misuse of certain syntactic features such as subject-verb agreement that have different tendencies depending on their mother tongues. Considering these, one could not be so sure which argument is correct. In any case, to the best of our knowledge, no one has yet answered this question. 
In view of this background, we take the first step in addressing this question. We hypothesize that: Hypothesis: Mother tongue interference is so strong that the relations in a language family are preserved in texts written in another language. In other words, mother tongue interference is so strong that one can reconstruct a language family tree from non-native texts. One of the major contributions of this work is to reveal and visualize a language family tree preserved in non-native texts, by examining the hypothesis. This becomes important in native language identification, which is useful for improving grammatical error correction systems (Chodorow et al., 2010) or for providing more targeted feedback to language learners. (Recently, native language identification has drawn the attention of NLP researchers; for instance, a shared task on native language identification took place at an NAACL-HLT 2013 workshop.) As we will see in Sect. 6, this paper reveals several crucial findings that contribute to improving native language identification. In addition, this paper shows that the findings could contribute to the reconstruction of language family trees (Enright and Kondrak, 2011; Gray and Atkinson, 2003; Barbançon et al., 2007; Batagelj et al., 1992; Nakhleh et al., 2005), which is one of the central tasks in historical linguistics. The rest of this paper is structured as follows. Sect. 2 introduces the basic approach of this work. Sect. 3 discusses the methods in detail. Sect. 4 describes experiments conducted to investigate the hypothesis. Sect. 5 discusses the experimental results. Sect. 6 discusses implications for work in related domains. 2 Approach To examine the hypothesis, we reconstruct a language family tree from English texts written by non-native speakers of English whose mother tongue is one of the Indo-European languages (Beekes, 2011; Ramat and Ramat, 2006). If the reconstructed tree is sufficiently similar to the original Indo-European family tree, it will support the hypothesis. If not, it suggests that some features other than mother tongue interference are more influential. The approach we use for reconstructing a language family tree is to apply agglomerative hierarchical clustering (Han and Kamber, 2006) to English texts written by non-native speakers. Researchers have already performed related work on reconstructing language family trees. For instance, Kroeber and Chrétien (1937) and Ellegård (1959) proposed statistical methods for measuring the similarity between languages. More recently, Batagelj et al. (1992) and Kita (1999) proposed methods for reconstructing language family trees using clustering. Among them, the most closely related method is that of Kita (1999). In his method, a variety of languages are modeled by their spelling systems (i.e., character-based n-gram language models). Then, agglomerative hierarchical clustering is applied to the language models to reconstruct a language family tree. The similarity used for clustering is based on a divergence-like distance between two language models that was originally proposed by Juang and Rabiner (1985). This method is purely data-driven and does not require human expert knowledge for the selection of linguistic features. Our work closely follows Kita's work. However, it should be emphasized that there is a significant difference between the two.
Kita's work (and other previous work) targets clustering of a variety of languages, whereas our work tries to reconstruct a language family tree preserved in non-native English. This significant difference prevents us from directly applying techniques in the literature to our task. For instance, Batagelj et al. (1992) use basic vocabularies such as belly in English and ventre in French to measure the similarity between languages. Obviously, this does not work for our task; belly is belly in English writing, whoever writes it. Kita's method is also unlikely to work well because all texts in our task share the same spelling system (i.e., English spelling). Although spelling is sometimes influenced by mother tongues, mother tongue interference involves a lot more, including overuse, underuse, and misuse of lexical, grammatical, and syntactic systems. To solve the problem, this work adopts a word-based language model in the expectation that word sequences reflect mother tongue interference. At the same time, its simple application would cause a serious side effect: it would reflect the topics of the given texts rather than mother tongue interference. Unfortunately, there exists no English corpus that covers a variety of language speakers with uniform topics; moreover, the availability of non-native corpora is still somewhat limited. This also means that available non-native corpora may be too small to train reliable word-based language models. The next section describes two methods (language model-based and vector-based), which address these problems. 3 Methods 3.1 Language Model-based Method To begin with, let us define the following symbols used in the methods. Let Di be a set of English texts where i denotes a mother tongue i. Similarly, let Mi be a language model trained using Di. To solve the problems pointed out in Sect. 2, we use an n-gram language model based on a mixture of word and POS tokens instead of a simple word-based language model. In this language model, content words in n-grams are replaced with their corresponding POS tags. This greatly decreases the influence of the topics of texts, as desired. It also decreases the number of parameters in the language model. To build the language model, the following three preprocessing steps are applied to Di. First, texts in Di are split into sentences. Second, each sentence is tokenized, POS-tagged, and mapped entirely to lowercase. For instance, the first example sentence in Sect. 1 would give: the/DT alien/NN would/MD not/RB use/VB my/PRP$ spaceship/NN but/CC the/DT hers/PRP ./. Finally, words are replaced with their corresponding POS tags; for the following words, word tokens are used as their corresponding POS tags: coordinating conjunctions, determiners, prepositions, modals, predeterminers, possessives, pronouns, question adverbs. Also, proper nouns are treated as common nouns. At this point, the special POS tags BOS and EOS are added at the beginning and end of each sentence, respectively. For instance, the above example results in the following word/POS sequence: BOS the NN would RB VB my NN but the hers . EOS Note that the content of the original sentence is far from clear while it still reflects mother tongue interference, especially in the hers. Now, the language model Mi can be built from Di. We set n = 3 (i.e., a trigram language model) following Kita's work and use Kneser-Ney (KN) smoothing (Kneser and Ney, 1995) to estimate its conditional probabilities. With Mi and Di, we can naturally apply Kita's method to our task.
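A minimal sketch of the preprocessing just described is given below (Python). The set of tags whose word forms are kept is our own approximation of the function-word categories listed above, and the input is assumed to be an already POS-tagged sentence (the paper uses CRFTagger for tagging).

```python
# Tags whose surface word is kept (coordinating conjunctions, determiners,
# prepositions, modals, predeterminers, possessives, pronouns, question adverbs);
# all other words are replaced by their POS tag.
KEEP_WORD_TAGS = {"CC", "DT", "PDT", "IN", "MD", "POS", "PRP", "PRP$", "WDT", "WP", "WP$", "WRB"}

def to_word_pos_sequence(tagged_sentence):
    """tagged_sentence: list of (word, POS) pairs for one tokenized, lower-cased sentence."""
    seq = ["BOS"]
    for word, tag in tagged_sentence:
        if tag in ("NNP", "NNPS"):                  # proper nouns treated as common nouns
            tag = "NN" if tag == "NNP" else "NNS"
        if tag in KEEP_WORD_TAGS or not tag[0].isalpha():   # keep function words and punctuation
            seq.append(word)
        else:
            seq.append(tag)
    seq.append("EOS")
    return seq

tagged = [("the", "DT"), ("alien", "NN"), ("would", "MD"), ("not", "RB"), ("use", "VB"),
          ("my", "PRP$"), ("spaceship", "NN"), ("but", "CC"), ("the", "DT"), ("hers", "PRP"),
          (".", ".")]
print(" ".join(to_word_pos_sequence(tagged)))
# BOS the NN would RB VB my NN but the hers . EOS
```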
The clustering algorithm used is agglomerative hierarchical clustering with the average linkage method. The distance between two language models (it is not a distance in the strict mathematical sense, but we use the term distance following the convention in the literature) is measured as follows. The probability that Mi generates Di is calculated by Pr(Di|Mi). Note that Pr(Di|Mi) ≈ Pr(w1,i) Pr(w2,i|w1,i) ∏_{t=3}^{|Di|} Pr(wt,i | wt−2,i, wt−1,i) (1), where wt,i and |Di| denote the t-th token in Di and the number of tokens in Di, respectively, since we use the trigram language model. Then, the distance from Mi to Mj is defined by d(Mi → Mj) = (1/|Dj|) log( Pr(Dj|Mj) / Pr(Dj|Mi) ) (2). In other words, the distance is determined based on the ratio of the probabilities that each language model generates the language data. Because d(Mi → Mj) and d(Mj → Mi) are not symmetrical, we define the distance between Mi and Mj to be their average: d(Mi, Mj) = ( d(Mi → Mj) + d(Mj → Mi) ) / 2 (3). Equation (3) is used to calculate the distance between two language models for clustering. To sum up, the procedure of the language family tree construction method is as follows: (i) preprocess each Di; (ii) build Mi from Di; (iii) calculate the distances between the language models; (iv) cluster the language data using the distances; (v) output the result as a language family tree. 3.2 Vector-based Method We also examine a vector-based method for language family tree reconstruction. As we will see in Sect. 5, this method allows us to interpret clustering results more easily than with the language model-based method, while both result in similar language family trees. In this method, Di is modeled by a vector. The vector is constructed based on the relative frequencies of trigrams. As a consequence, the distance is naturally defined by the Euclidean distance between two vectors. The clustering procedure is the same as for the language model-based method except that Mi is vector-based and the distance metric is Euclidean.
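Here is a short sketch of the distance in Equations (1)-(3) (Python). The trigram-model interface `model(w, u, v)` returning the smoothed probability Pr(w | u, v) is hypothetical, and the leading unigram and bigram factors of Equation (1) are folded into the BOS-padded contexts for brevity.

```python
import math

def log_prob(model, data):
    """Approximate log Pr(D | M) under a trigram model, cf. Equation (1).
    `data` is the full token sequence of a corpus (BOS/EOS markers included)."""
    total = 0.0
    for t in range(2, len(data)):
        total += math.log(model(data[t], data[t - 2], data[t - 1]))
    return total

def directed_distance(m_i, m_j, d_j):
    """Equation (2): d(M_i -> M_j) = (1/|D_j|) * log( Pr(D_j|M_j) / Pr(D_j|M_i) )."""
    return (log_prob(m_j, d_j) - log_prob(m_i, d_j)) / len(d_j)

def distance(m_i, m_j, d_i, d_j):
    """Equation (3): symmetrize by averaging the two directed distances."""
    return 0.5 * (directed_distance(m_i, m_j, d_j) + directed_distance(m_j, m_i, d_i))
```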
4 Experiments We selected the ICLE corpus v.2 (Granger et al., 2009) as the target language data. It consists of English essays written by a wide variety of non-native speakers of English. Among them, the 11 shown in Table 1 are of Indo-European languages. Accordingly, we selected the subcorpora of the 11 languages in the experiments. Before the experiments, we preprocessed the corpus data to control the experimental conditions. Because some of the writers had more than one native language, we excluded essays that did not meet the following three conditions: (i) the writer has only one native language; (ii) the writer has only one language at home; (iii) the two languages in (i) and (ii) are the same as the native language of the subcorpus to which the essay belongs. (For example, because of (iii), essays written by native speakers of Swedish in the Finnish subcorpus were excluded from the experiments; they were collected in Finland and might be influenced by Finnish.) After the selection, markup tags such as essay IDs were removed from the corpus data. Also, the symbols ‘ and ’ were unified into ’ (the symbol ‘ is sometimes used for ’, e.g., I‘m). For reference, we also used native English (British and American university students' essays in the LOCNESS corpus, a corpus of native English essays made up of British pupils' essays, British university students' essays, and American university students' essays: https://www.uclouvain.be/en-cecl-locness.html) and two sets of Japanese English (ICLE and the NICE corpus (Sugiura et al., 2007)). Table 1 shows the statistics on the corpus data. The performance of POS tagging is an important factor in our methods because they are based on word/POS sequences. Existing POS taggers might not perform well on non-native English texts because they are normally developed to analyze native English texts. Considering this, we tested CRFTagger (Xuan-Hieu Phan, "CRFTagger: CRF English POS Tagger," http://crftagger.sourceforge.net/, 2006) on non-native English texts containing various grammatical errors before the experiments (Nagata et al., 2011). It turned out that CRFTagger achieved an accuracy of 0.932 (compared to 0.970 on native texts). Although it did not perform as well as on native texts, it still achieved a fair accuracy. Accordingly, we decided to use it in our experiments. Then, we generated cluster trees from the corpus data using the methods described in Sect. 3.
Native language | # of essays | # of tokens
Bulgarian | 294 | 219,551
Czech | 220 | 205,264
Dutch | 244 | 240,861
French | 273 | 202,439
German | 395 | 236,841
Italian | 346 | 219,581
Norwegian | 290 | 218,056
Polish | 354 | 251,074
Russian | 255 | 236,748
Spanish | 237 | 211,343
Swedish | 301 | 268,361
English | 298 | 294,357
Japanese1 (ICLE) | 171 | 224,534
Japanese2 (NICE) | 340 | 130,156
Total | 4,018 | 3,159,166
Table 1: Statistics on target corpora.
We used the Kyoto Language Modeling toolkit (http://www.phontron.com/kylm/) to build language models from the corpus data. We removed n-grams that appeared less than five times in each subcorpus from the language models (we found that the results were not sensitive to the value of the frequency cutoff so long as it was set to a small number). Similarly, we implemented the vector-based method with trigrams using the same frequency cutoff (but without smoothing).
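The clustering step itself is standard average-linkage agglomerative clustering; the following sketch uses SciPy over a precomputed distance matrix (the choice of SciPy and the placeholder distances are our assumptions; the paper does not state which implementation was used).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

labels = ["Bulgarian", "Czech", "Dutch", "French", "German", "Italian",
          "Norwegian", "Polish", "Russian", "Spanish", "Swedish"]

# dist[i][j] would hold d(M_i, M_j) from Equation (3), or the Euclidean distance
# between trigram-frequency vectors for the vector-based method.
dist = np.random.rand(len(labels), len(labels))   # placeholder values for illustration only
dist = (dist + dist.T) / 2.0
np.fill_diagonal(dist, 0.0)

# Average-linkage agglomerative clustering over the condensed distance matrix.
tree = linkage(squareform(dist), method="average")
dendrogram(tree, labels=labels, no_plot=True)     # plot it to visualize the family tree
```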
Fig. 1 shows the experimental results. The tree at the top is the Indo-European family tree drawn based on the figure shown in Crystal (1997); it shows that the 11 languages are divided into three groups: the Italic, Germanic, and Slavic branches. The second and third trees are the cluster trees generated by the language model-based and vector-based methods, respectively. The number at each branching node denotes in which step the two clusters were merged. Figure 1: Experimental results: the Indo-European family tree (with its Italic, Germanic, and Slavic branches), the cluster tree generated by the LM-based method, and the cluster tree generated by vector-based clustering over the 11 Englishes. The experimental results strongly support the hypothesis we made in Sect. 1. Fig. 1 reveals that the language model-based method correctly groups the 11 Englishes into the Italic, Germanic, and Slavic branches. It first merges Norwegian-English and Swedish-English into a cluster. The two languages belong to the North Germanic branch of the Germanic branch and thus are closely related. Subsequently, the language model-based method correctly merges the other languages into the three branches. A difference between its cluster tree and the Indo-European family tree is that there are some mismatches within the Germanic and Slavic branches. While the difference exists, the method strongly distinguishes the three branches from one another. The third tree shows that the vector-based method behaves similarly, although it mistakenly attaches Polish-English to an independent branch. From these results, we can say that mother tongue interference is transferred into the 11 Englishes strongly enough for reconstructing their language family tree, which we propose calling the interlanguage Indo-European family tree in English. Fig. 2 shows the experimental results with native and Japanese Englishes. It shows that the same interlanguage Indo-European family tree was reconstructed as before. More interestingly, native English was detached from the interlanguage Indo-European family tree, contrary to the expectation that it would be attached to the Germanic branch because English is of course a member of the Germanic branch. This implies that non-nativeness common to the 11 Englishes is more influential than the intrafamily distance is; otherwise, native English would be included in the Germanic branch. (Admittedly, we need further investigation to confirm this argument, especially because we applied CRFTagger, which is developed to analyze native English, to both non-native and native Englishes, which might affect the results.) Figure 2: Experimental results with native and Japanese Englishes (the interlanguage Indo-European family tree, native English, and the cluster of the two Japanese Englishes). Fig. 2 also shows that the two sets of Japanese English were merged into a cluster and that this cluster was the most distant in the whole tree. This shows that the interfamily distance is the most influential factor. Based on these results, we can further hypothesize as follows: interfamily distance > non-nativeness > intrafamily distance. 5 Discussion To get a better understanding of the interlanguage Indo-European family tree, we further explore linguistic features that explain the above phenomena well. When we analyze the experimental results, however, some problems arise. It is almost impossible to find someone who has a good knowledge of the 11 languages and their mother tongue interference in English writing. Besides, there are a large number of language pairs to compare. Thus, we need an efficient and effective way to analyze the experimental results. To address these problems, we did the following. First, we focused on only a few Englishes out of the 11. Because one of the authors had some knowledge of French, we selected French-English as the main target. This naturally made us select the other Italic Englishes as its counterparts.
Also, because we had access to a native speaker of Russian who had a good knowledge of English, we included Russian-English in our focus. We analyzed these Englishes and then examined whether the findings obtained apply to the other Englishes or not. Second, we used a method for extracting interesting trigrams from the corpus data. The method compares three out of the 11 corpora (for example, French-, Spanish-, and Russian-Englishes). If we remove instances of a trigram from each set, the cluster tree involving the three may change. For example, the removal of but the hers may result in a cluster tree merging French- and Russian-Englishes before French- and Spanish-Englishes. Even if the tree does not change, the distances may change in that direction. We analyzed what trigrams had contributed to the clustering results with this approach. To formalize this approach, we denote a trigram by t. We also denote its relative frequency in the language data Di by rt,i. Then, the change in the distances caused by the removal of t from Di, Dj, and Dk is quantified by s = (rt,k − rt,i)^2 − (rt,j − rt,i)^2 (4) in the vector-based method. The quantity (rt,k − rt,i)^2 is directly related to the decrease in the distance between Di and Dk, and similarly (rt,j − rt,i)^2 to that between Di and Dj, in the vector-based method. Thus, the greater s is, the higher the chance that the cluster tree changes. Therefore, we can obtain a list of interesting trigrams by sorting them according to s. We could do a similar calculation in the language model-based method using the conditional probabilities; however, it requires a more complicated calculation. Accordingly, we limit ourselves to the vector-based method in this analysis, noting that both methods generated similar cluster trees.
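A small sketch of this trigram-ranking computation is given below (Python; the relative-frequency tables are assumed to be precomputed dictionaries mapping trigrams to their relative frequencies in each corpus).

```python
def interesting_trigrams(r_i, r_j, r_k):
    """Rank trigrams by s = (r_{t,k} - r_{t,i})^2 - (r_{t,j} - r_{t,i})^2 (Equation 4).

    r_i, r_j, r_k: dicts mapping a trigram to its relative frequency in D_i, D_j, D_k.
    A large s means removing t would pull D_i towards D_k more than towards D_j."""
    scored = []
    for t in set(r_i) | set(r_j) | set(r_k):
        ri, rj, rk = r_i.get(t, 0.0), r_j.get(t, 0.0), r_k.get(t, 0.0)
        s = (rk - ri) ** 2 - (rj - ri) ** 2
        scored.append((s, t))
    return sorted(scored, reverse=True)

# Illustration using the (rounded) relative frequencies of two trigrams from Table 2,
# expressed as proportions rather than percentages.
french = {"the NN of": 0.0101, "a JJ NN": 0.0085}
spanish = {"the NN of": 0.0098, "a JJ NN": 0.0077}
russian = {"the NN of": 0.0078, "a JJ NN": 0.0062}
for s, t in interesting_trigrams(french, spanish, russian):
    print(round(s * 1e6, 2), t)   # roughly reproduces the scaled s values in Table 2
```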
Table 2 shows the top 15 interesting trigrams where Di, Dj, and Dk are French-, Spanish-, and Russian-Englishes, respectively. Note that s is multiplied by 10^6 and r is given in % for readability. The list reveals that many of the trigrams contain the article a or the. Interestingly, their frequencies are similar in French-English and Spanish-English, and both are higher than in Russian-English. This corresponds to the fact that French and Spanish have articles whereas Russian does not. Actually, the same argument can be made about the other Italic and Slavic Englishes (e.g., the JJ NN: Italian-English 0.82; Polish-English 0.72). (Due to space limitations, the other lists are not included in this paper but are available at http://web.hyogo-u.ac.jp/nagata/acl/.) An exception is that of trigrams containing the definite article in Bulgarian-English; it tends to be higher in Bulgarian-English than in the other Slavic Englishes. Surprisingly and interestingly, however, this reflects the fact that Bulgarian does have the definite article but not the indefinite article (e.g., the JJ NN: 0.82; a JJ NN: 0.60 in Bulgarian-English). Table 3 shows that the differences in article use exist even between the Italic and Germanic branches, despite the fact that both have the indefinite and definite articles. The list still contains a number of trigrams containing articles. For a better understanding of this, we looked further into the distribution of articles in the corpus data. It turns out that the distribution almost perfectly groups the 11 Englishes into the corresponding branches, as shown in Fig. 3. The overall use of articles is less frequent in the Slavic Englishes. The definite article is used more frequently in the Italic Englishes than in the Germanic Englishes (except for Dutch-English). We speculate that this is perhaps because the Italic languages have a wider usage of the definite article, such as its modification of possessive pronouns and proper nouns. The Japanese Englishes form another group (this is also true for the following findings). This corresponds to the fact that the Japanese language does not have an article system similar to that of English.
s | Trigram t | rt,i | rt,j | rt,k
5.14 | the NN of | 1.01 | 0.98 | 0.78
4.38 | a JJ NN | 0.85 | 0.77 | 0.62
2.74 | the JJ NN | 0.87 | 0.86 | 0.71
2.30 | NN of the | 0.49 | 0.52 | 0.33
1.64 | . . . | 0.22 | 0.12 | 0.05
1.56 | NNS . EOS | 0.77 | 0.70 | 0.92
1.31 | NNS and NNS | 0.09 | 0.13 | 0.21
1.25 | BOS RB , | 0.25 | 0.22 | 0.14
1.22 | of the NN | 0.42 | 0.44 | 0.30
1.17 | VBZ to VB | 0.26 | 0.22 | 0.14
1.09 | BOS i VBP | 0.07 | 0.05 | 0.17
1.03 | NN of NN | 0.74 | 0.70 | 0.63
0.88 | NN of JJ | 0.15 | 0.15 | 0.25
0.67 | the JJ NNS | 0.28 | 0.28 | 0.20
0.65 | NN to VB | 0.40 | 0.38 | 0.31
Table 2: Interesting trigrams (French- (Di), Spanish- (Dj), and Russian- (Dk) Englishes).
Another interesting trigram, though not as obvious as article use, is NN of NN, which ranks 12th and 2nd in Tables 2 and 3, respectively. In the Italic Englishes, this trigram is more frequent than in the other non-native Englishes, as shown in Fig. 4. This corresponds to the fact that noun-noun compounds are less common in the Italic languages than in English and that, instead, the of-phrase (NN of NN) is preferred (Swan and Smith, 2001).
This tendency in the length of nounnoun compounds provides us with a crucial insight for native language identification, which we will 2 3 4 5 6 1 1.5 2 2.5 3 Relative frequency of definite article (%) Relative frequency of indefinite article (%) Bulgarian Czech Dutch French German Italian Norwegian Polish Russian Spanish Swedish English Japanese1 Japanese2 Italic Germanic Slavic Japanese Figure 3: Distribution of articles. 0 0.5 1 Relative frequency of NN of NN (%) French Italian Spanish Italic Polish Russian Bulgarian Czech Slavic English Dutch Swedish German Norwegian Germanic Japanese1 Japanese2 Japanese Figure 4: Relative frequency of NN of NN in each corpus (%). come back to in Sect. 6. The trigrams BOS RB , in Table 2 and RB . EOS in Table 3 imply that there might also be a certain pattern in adverb position in the 11 Englishes (they roughly correspond to adverbs at the beginning and end of sentences). Fig. 6 shows an insight into this. The horizontal and vertical axes correspond to the ratio of adverbs at the beginning and the end of sentences, respectively. It turns out that the German Englishes form a group. So do the Italic Englishes although it is less dense. In contrast, the Slavic Englishes are scattered. However, the ratios give a clue to how to distinguish Slavic Englishes from the others when combined with other 1143 0 0.1 Average length of noun-noun compounds French Italian Spanish Italic Bulgarian Czech Russian Polish Slavic Swedish Norwegian German Dutch English Germanic Japanese1 Japanese2 Japanese Figure 5: Average length of noun-noun compounds in each corpus. 5 10 15 20 25 30 Ratio of adverbs at the end (%) Ratio of adverbs at the beginning (%) Bulgarian Czech Polish Russian Dutch German Norwegian Swedish French Italian Spanish English Japanese1 Japanese2 Italic Germanic Slavic Japanese Figure 6: Distribution of adverb position. trigrams. For instance, although Polish-English is located in the middle of Swedish-English and Bulgarian-English in the distribution of articles (in Fig. 3), the ratios tell us that Polish-English is much nearer to Bulgarian-English. 6 Implications for Work in Related Domains Researchers including Wong and Dras (2009), Wong et al. (2011; 2012), and Koppel et al. (2005) work on native language identification and show that machine learning-based methods are effective. Wong and Dras (2009) propose using information about grammatical errors such as errors in determiners to achieve better performance while they show that its use does not improve the performance, contrary to the expectation. Related to this, other researchers (Koppel and Ordan, 2011; van Halteren, 2008) show that machine learning-based methods can also predict the source language of a given translated text although it should be emphasized that it is a different task from native language identification because translation is not typically performed by non-native speakers but rather native speakers of the target language11. The experimental results show that n-grams containing articles are predictive for identifying native languages. This indicates that they should be used in the native language identification task. Importantly, all n-grams containing articles should be used in the classifier unlike the previous methods that are based only on ngrams containing article errors. Besides, no articles should be explicitly coded in n-grams for taking the overuse/underuse of articles into consideration. 
We can achieve this by adding a special symbol such as φ to the beginning of each NP whose head noun is a common noun and that has no determiner in it, as in "I like φ orange juice." In addition, the length of noun-noun compounds and the position of adverbs should also be considered in native language identification. In particular, the former can be modeled by the Poisson distribution as follows. The Poisson distribution gives the probability of the number of events occurring in a fixed time. In our case, the number of events in a fixed time corresponds to the number of consecutive repetitions of common nouns in NPs, which in turn corresponds to the length. To be precise, the probability of a noun-noun compound with length l is given by

Pr(l) = (λ^l / l!) e^(−λ),    (5)

where λ corresponds to the average length. Fig. 7 shows that the observed values in the French-English data very closely fit the theoretical probabilities given by Equation (5).[12] This holds for the other Englishes although we cannot show them because of the space limitation. Consequently, Equation (5) should be useful in native language identification. Fortunately, it can be naturally integrated into existing classifiers.

[Figure 7: Distribution of noun-noun compound length for French-English (theoretical vs. observed probabilities).]

In the domain of historical linguistics, researchers have used computational and corpus-based methods for reconstructing language family trees. Some (Enright and Kondrak, 2011; Gray and Atkinson, 2003; Barbançon et al., 2007; Batagelj et al., 1992; Nakhleh et al., 2005) apply clustering techniques to the task of language family tree reconstruction. Others (Kita, 1999; Rama and Singh, 2009) use corpus statistics for the same purpose. These methods reconstruct language family trees based on linguistic features that exist within words, including lexical, phonological, and morphological features. The experimental results in this paper suggest the possibility of the use of non-native texts for reconstructing language family trees. It allows us to use linguistic features that exist between words, as seen in our methods, which has been difficult with previous methods. Language involves features between words, such as phrase construction and syntax, as well as features within words, and thus they should both be considered in reconstruction of language family trees.

Footnote 11: For comparison, we conducted a pilot study where we reconstructed a language family tree from English texts in the European Parliament Proceedings Parallel Corpus (Europarl) (Koehn, 2011). It turned out that the reconstructed tree was different from the canonical tree (available at http://web.hyogo-u.ac.jp/nagata/acl/). However, we need further investigation to confirm this, because each subcorpus in Europarl is variable in many dimensions, including its size and style (e.g., overuse of certain phrases such as ladies and gentlemen).

Footnote 12: The theoretical and observed values are so close that it is difficult to distinguish between the two lines in Fig. 7. For example, Pr(l = 1) = 0.0303 while the corresponding observed value is 0.0299.

7 Conclusions

In this paper, we have shown that mother tongue interference is so strong that the relations between members of the Indo-European language family are preserved in English texts written by Indo-European language speakers. To show this, we have used clustering to reconstruct a language family tree from 11 sets of non-native English texts.
It turned out that the reconstructed tree correctly groups them into the Italic, Germanic, and Slavic branches of the IndoEuropean family tree. Based on the resulting trees, we have then hypothesized that the following relation holds in mother tongue interference: interfamily distance > non-nativeness > intrafamily distance. We have further explored several intriguing linguistic features that play an important role in mother tongue interference: (i) article use, (ii) NP construction, and (iii) adverb position, which provide several insights for improving the tasks of native language identification and language family tree reconstruction. Acknowledgments This work was partly supported by the Digiteo foreign guest project. We would like to thank the three anonymous reviewers and the following persons for their useful comments on this paper: Kotaro Funakoshi, Mitsuaki Hayase, Atsuo Kawai, Robert Ladig, Graham Neubig, Vera Sheinman, Hiroya Takamura, David Valmorin, Mikko Vilenius. References Jan Aarts and Sylviane Granger, 1998. Tag sequences in learner corpora: a key to interlanguage grammar and discourse, pages 132–141. Longman, New York. Bengt Altenberg and Marie Tapper, 1998. The use of adverbial connectors in advanced Swedish learners’ written English, pages 80–93. Longman, New York. Franc¸ois Barbanc¸on, Tandy Warnow, Steven N. Evans, Donald Ringe, and Luay Nakhleh. 2007. An experimental study comparing linguistic phylogenetic reconstruction methods. Statistics Technical Reports, page 732. Vladimir Batagelj, Tomaˇz Pisanski, and Damijana Kerˇziˇc. 1992. Automatic clustering of languages. Computational Linguistics, 18(3):339–352. 1145 Robert S.P. Beekes. 2011. Comparative IndoEuropean Linguistics: An Introduction (2nd ed.). John Benjamins Publishing Company, Amsterdam. Martin Chodorow, Michael Gamon, and Joel R. Tetreault. 2010. The utility of article and preposition error correction systems for English language learners: feedback and assessment. Language Testing, 27(3):419–436. David Crystal. 1997. The Cambridge Encyclopedia of Language (2nd ed.). Cambridge University Press, Cambridge. Niels Davidsen-Nielsen and Peter Harder, 2001. Speakers of Scandinavian languages: Danish, Norwegian, Swedish, pages 21–36. Cambridge University Press, Cambridge. Alvar Elleg˚ard. 1959. Statistical measurement of linguistic relationship. Language, 35(2):131–156. Jessica Enright and Grzegorz Kondrak. 2011. The application of chordal graphs to inferring phylogenetic trees of languages. In Proc. of 5th International Joint Conference on Natural Language Processing, pages 8–13. Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. International Corpus of Learner English v2. Presses universitaires de Louvain, Louvain. Russell D. Gray and Quentin D. Atkinson. 2003. Language-tree divergence times support the Anatolian theory of Indo-European origin. Nature, 426:435–438. Jiawei Han and Micheline Kamber. 2006. Data Mining: Concepts and Techniques (2nd Ed.). Morgan Kaufmann Publishers, San Francisco. Bing-Hwang Juang and Lawrence R. Rabiner. 1985. A probabilistic distance measure for hidden Markov models. AT&T Technical Journal, 64(2):391–408. Kenji Kita. 1999. Automatic clustering of languages based on probabilistic models. Journal of Quantitative Linguistics, 6(2):167–171. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proc. of International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181–184. Philipp Koehn. 2011. 
Europarl: A parallel corpus for statistical machine translation. In Proc. of 10th Machine Translation Summit, pages 79–86. Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In Proc. of 49th Annual Meeting of the Association for Computational Linguistics, pages 1318–1326. Moshe Koppel, Jonathan Schler, and Kfir Zigdon. 2005. Determining an author’s native language by mining a text for errors. In Proc. of 11th ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, pages 624–628. Alfred L. Kroeber and Charles D. Chri´etien. 1937. Quantitative classification of Indo-European languages. Language, 13(2):83–103. Ryo Nagata, Edward Whittaker, and Vera Sheinman. 2011. Creating a manually error-tagged and shallow-parsed learner corpus. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1210–1219. Luay Nakhleh, Tandy Warnow, Don Ringe, and Steven N. Evans. 2005. A comparison of phylogenetic reconstruction methods on an Indo-European dataset. Transactions of the Philological Society, 103(2):171–192. Taraka Rama and Anil Kumar Singh. 2009. From bag of languages to family trees from noisy corpus. In Proc. of Recent Advances in Natural Language Processing, pages 355–359. Anna Giacalone Ramat and Paolo Ramat, 2006. The Indo-European Languages. Routledge, New York. William Snyder. 1996. The acquisitional role of the syntax-morphology interface: Morphological compounds and syntactic complex predicates. In Proc. of Annual Boston University Conference on Language Development, volume 2, pages 728–735. Masatoshi Sugiura, Masumi Narita, Tomomi Ishida, Tatsuya Sakaue, Remi Murao, and Kyoko Muraki. 2007. A discriminant analysis of non-native speakers and native speakers of English. In Proc. of Corpus Linguistics Conference CL2007, pages 84–89. Michael Swan and Bernard Smith. 2001. Learner English (2nd Ed.). Cambridge University Press, Cambridge. Hans van Halteren. 2008. Source language markers in EUROPARL translations. In Proc. of 22nd International Conference on Computational Linguistics, pages 937–944. Sze-Meng J. Wong and Mark Dras. 2009. Contrastive analysis and native language identification. In Proc. Australasian Language Technology Workshop, pages 53–61. Sze-Meng J. Wong, Mark Dras, and Mark Johnson. 2011. Exploiting parse structures for native language identification. In Proc. Conference on Empirical Methods in Natural Language Processing, pages 1600–1611. Sze-Meng J. Wong, Mark Dras, and Mark Johnson. 2012. Exploring adaptor grammars for native language identification. In Proc. Joint Conference on 1146 Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 699–709. 1147
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1148–1158, Sofia, Bulgaria, August 4-9 2013. ©2013 Association for Computational Linguistics

Word Association Profiles and their Use for Automated Scoring of Essays

Beata Beigman Klebanov and Michael Flor
Educational Testing Service
660 Rosedale Road
Princeton, NJ 08541
{bbeigmanklebanov,mflor}@ets.org

Abstract

We describe a new representation of the content vocabulary of a text we call word association profile that captures the proportions of highly associated, mildly associated, unassociated, and dis-associated pairs of words that co-exist in the given text. We illustrate the shape of the distribution and observe variation with genre and target audience. We present a study of the relationship between quality of writing and word association profiles. For a set of essays written by college graduates on a number of general topics, we show that the higher scoring essays tend to have higher percentages of both highly associated and dis-associated pairs, and lower percentages of mildly associated pairs of words. Finally, we use word association profiles to improve a system for automated scoring of essays.

1 Introduction

The vast majority of contemporary research that investigates statistical properties of language deals with characterizing words by extracting information about their behavior from large corpora. Thus, co-occurrence of words in n-word windows, syntactic structures, sentences, paragraphs, and even whole documents is captured in vector-space models built from text corpora (Turney and Pantel, 2010; Basili and Pennacchiotti, 2010; Erk and Padó, 2008; Mitchell and Lapata, 2008; Bullinaria and Levy, 2007; Jones and Mewhort, 2007; Pado and Lapata, 2007; Lin, 1998; Landauer and Dumais, 1997; Lund and Burgess, 1996; Salton et al., 1975). However, little is known about typical profiles of texts in terms of the co-occurrence behavior of their words. Some information can be inferred from the success of statistical techniques in predicting certain structures in text. For example, the fact that a text segmentation algorithm that uses information about patterns of word co-occurrences can detect sub-topic shifts in a text (Riedl and Biemann, 2012; Misra et al., 2009; Eisenstein and Barzilay, 2008) tells us that texts contain some proportion of more highly associated word pairs (those in subsequent sentences within the same topical unit) and of less highly associated pairs (those in sentences from different topical units).[1] Yet, does each text have a different distribution of highly associated, mildly associated, unassociated, and dis-associated pairs of words, or do texts tend to strike a similar balance of these? What are the proportions of the different levels of association, how much variation exists, and are there systematic differences between various kinds of texts? We present research that makes a first step in addressing these questions.

From the applied perspective, our interest is in quantifying differences between well-written and poorly written essays, for the purposes of automated scoring of essays. We therefore concentrate on essay data for the main experiments reported in this paper, although some additional corpora will be used for illustration purposes.

The paper is organized as follows. Section 2 presents our methodology for building word association profiles for texts. Section 3 illustrates the profiles for three corpora from different genres.
Section 4.2 presents our study of the relationship between writing quality and patterns of word associations, with section 4.5 showing the results of adding a feature based on the word association profile to a state-of-art essay scoring system. Related work is reviewed in section 5.

Footnote 1: Note that the classical approach to topical segmentation of texts, TextTiling (Hearst, 1997), uses only word repetitions. The cited approaches use topic models that are in turn estimated using word co-occurrence.

2 Methodology

In order to describe the word association profile of a text, three decisions need to be made. The first decision is how to quantify the extent of co-occurrence between two words; we will use pointwise mutual information (PMI) estimated from a large and diverse corpus of texts. The second is which pairs of words in a text to consider when building a profile for the text; we opted for all pairs of content word types occurring in a text, irrespective of the distance between them. We consider word types, not tokens; no lemmatization is performed. The third decision is how to represent the co-occurrence profiles; we use a histogram where each bin represents the proportion of word pairs in the given interval of PMI values. The rest of the section gives more detail about these decisions.

To obtain comprehensive information about the typical co-occurrence behavior of words of English, we build a first-order co-occurrence word-space model (Turney and Pantel, 2010; Baroni and Lenci, 2010). The model was generated from a corpus of texts of about 2.5 billion words, counting co-occurrence in a paragraph,[2] using no distance coefficients (Bullinaria and Levy, 2007). About 2 billion words come from the Gigaword 2003 corpus (Graff and Cieri, 2003). An additional 500 million words come from an in-house corpus containing popular science and fiction texts. Occurrence counts of 2.1 million word types and of 1,279 million word type pairs are efficiently compressed using the TrendStream technology (Flor, 2013), resulting in a database file of 4.7GB. TrendStream is a trie-based architecture for storage, retrieval, and updating of very large word n-gram datasets. We store pairwise word associations as bigrams; since associations are unordered, only one of the orders is actually stored in the database.

Footnote 2: In all texts, we use human-marked paragraphs, indicated either by a new line or by an xml markup.

There is an extensive literature on the use of word-association measures for NLP, especially for the detection of collocations (Pecina, 2010; Evert, 2008; Futagi et al., 2008). The use of point-wise mutual information with word-space models is noted in (Zhang et al., 2012; Baroni and Lenci, 2010; Mitchell and Lapata, 2008; Turney, 2001). Point-wise mutual information is defined as follows (Church and Hanks, 1990):

PMI(x, y) = log2 [ P(x, y) / (P(x) P(y)) ]    (1)

Differently from Church and Hanks (1990), we disregard word order when computing P(x, y). All probabilities are estimated using frequencies. We define WAP_T – the word association profile of a text T – as the distribution of PMI(x, y) for all pairs of content[3] word types (x, y) ∈ T. All pairs of word types for which the associations database returned a null value (the pair has never been observed in the same paragraph) are excluded from the calculation. For our main dataset (described later as setA, section 4.1), the average percentage of non-null values per text is 92%. To represent the WAP of a text, we use a 60-bin histogram spanning all PMI values.
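As a concrete illustration of how such a profile could be computed, here is a minimal sketch in Python. The pairwise PMI lookup table is a toy stand-in for the TrendStream association database, and the bin layout only approximates the 60-bin scheme described next; both are assumptions made for the sake of the example, not the authors' implementation.

```python
import math

# Toy stand-in for the association database: unordered word-type pair -> PMI.
# A missing pair plays the role of a "null" association and is skipped.
PMI_TABLE = {
    frozenset(["dog", "bark"]): 5.8,
    frozenset(["dog", "tail"]): 5.6,
    frozenset(["bark", "tail"]): 4.1,
    frozenset(["green", "idea"]): -1.2,
}

def wap_histogram(content_types, pmi_table, lo=-5.0, n_interior=58, width=1.0 / 6.0):
    """Proportion of word-type pairs per PMI bin: one open-ended bottom bin
    (PMI <= lo), n_interior equal-width bins, and one open-ended top bin."""
    hi = lo + n_interior * width                 # roughly 4.67
    counts = [0] * (n_interior + 2)
    types = sorted(set(content_types))
    total = 0
    for i, x in enumerate(types):
        for y in types[i + 1:]:
            pmi = pmi_table.get(frozenset([x, y]))
            if pmi is None:
                continue                         # null association: excluded
            if pmi <= lo:
                b = 0
            elif pmi > hi:
                b = n_interior + 1
            else:
                b = max(1, min(int(math.ceil((pmi - lo) / width)), n_interior))
            counts[b] += 1
            total += 1
    return [c / total for c in counts] if total else counts

print(wap_histogram(["dog", "bark", "tail", "green", "idea"], PMI_TABLE))
```

Averaging such per-text histograms bin by bin over a collection would then give the kind of corpus-level profiles compared below for essays, WSJ articles, and the Grades 3-4 texts.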
The lowest bin (shown in Figures 1 and 2 as PMI = –5) contains pairs with PMI ≤ –5; the topmost bin (shown in Figures 1 and 2 as PMI = 4.83) contains pairs with PMI > 4.67, while the rest of the bins contain word pairs (x, y) with –5 < PMI(x, y) ≤ 4.67. Each bin in the histogram (apart from the top and the bottom ones) corresponds to a PMI interval of 0.167. We chose a relatively fine-grained binning and performed no optimization for grid selection; for more sophisticated gridding approaches to study non-linear relationships in the data, see Reshef et al. (2011).

We will say that a text A is tighter than text B if the WAP of A is shifted towards the higher end of PMI values relative to text B. The intuition behind the terminology is that texts with higher proportions of highly associated pairs are likelier to be more focused, dealing with a small number of topics at greater length, as opposed to texts that bring various different themes into the text to various extents. Thus, the text "The dog barked and wagged its tail" is much tighter than the text "Green ideas sleep furiously", with all the six content word pairs scoring above PMI=5.5 in the first and below PMI=2.2 in the second.[4]

Footnote 3: We part-of-speech tag a text using the OpenNLP tagger (http://opennlp.apache.org) and only take into account common and proper nouns, verbs, adjectives, and adverbs.

Footnote 4: We omitted colorless from the second example, as colorless is actually highly associated with green (PMI=4.36).

3 Illustration: The shape of the distribution

For a first illustration, we use a corpus of 5,904 essays written as part of a standardized graduate school admission test (a full description of these data is given in section 4.1, under setA p1-p6). For each essay, we compute the WAP and represent it using the 60-bin histogram. For each bin in the histogram, we compute its average value over the 5,904 essays; additionally, we compute the 15th and 85th percentiles for each bin, so that the band between them contains values observed for 70% of the texts. The series with the solid thick (blue) line in Figure 1 shows the distribution of the average percentage of word type pairs per bin (essays-av); the dotted lines above and below show the band capturing the middle 70% of the distribution (essays-15 and essays-85). We observe that the shape of the WAP is very stable across essays, and the variation around the average is quite limited.

Next, consider the thin solid (green) line with asterisk-shaped markers in Figure 1 that plots a similarly-binned histogram for the normal distribution with µ=0.90 and σ=0.66. We note that for values below PMI=2.17, the normal curve is within or almost within the 70% band for the essay data. The divergence occurs at the right tail with PMI>2.17, which covers, on average, about 8% of the pairs (5.6% and 10.4% for the 15th and 85th percentiles, respectively).

To get an idea about possible variation in the distribution, we consider two additional corpora from different genres. We use a corpus of Wall Street Journal 1987 articles from the TIPSTER collection.[5] We picked articles of 250 to 700 words in length, in order to keep the length of texts comparable to the essay data, while varying the genre; 770 such articles were found. The dashed (orange) line in Figure 1 shows the distribution of average values for the WSJ collection (wsj-av).
We observe that the shape of the distribution is similar to that of the essay data, although WSJ articles tend to be less tight, on average, since the distribution in the PMI<2.17 area in the WSJ data is shifted to the left relative to essays. Yet, the picture at the right tail is remarkably similar to that of the essays, with 9% of word pairs, on average, having PMI>2.17.

Footnote 5: LDC93T3A in the LDC catalogue.

The second additional corpus contains 140 literary texts written or adapted for readers in grades 3 and 4 in US schools (Sheehan et al., 2008). In terms of length, these texts fall into the same range as the other corpora, averaging 507 words. The average WAP for these texts is shown with a thin solid (purple) line with circular markers in Figure 1 (Grades 3-4). These texts are much tighter than texts in the other two collections, as the distribution is shifted to the right. The right tail, with PMI>2.17, holds 19% of all word pairs in these texts – more than twice the proportion in essays written by college graduates or in texts from the WSJ.

It is instructive to check whether the over-use of highly associated pairs is felt during reading. These texts strike an adult reader as overly explicit, taking the space to state things that an adult reader would readily infer or assume. For example, consider the following opening paragraph: "Grandma Rose gave Daniel a recorder. A recorder is a musical instrument. Daniel learned to play by blowing on the recorder. It didn't take lots of air. It didn't take big hands to hold since it was pocket-sized. His fingers covered the toneholes just fine. Soon Daniel played entire songs. His mother loved to listen. Sometimes she hummed along with Daniel's recorder." The second and the third sentences state things that for an adult reader would be too obvious to need mention. In fact, these sentences almost seem like training sentences – the kind of sentences from which the associations between recorder and musical instrument, play, blowing can be learned. According to Hoey's theory of lexical priming (Hoey, 2005), one of the main functions of schooling is to imbue children with the societally sanctioned word associations.

[Figure 1: WAP histograms for three corpora (percentage of pairs of word types per PMI bin), shown with smooth lines instead of bars for readability. Average for essays (a thick solid blue line), average for WSJ articles (a dashed orange line); average for the Grades 3-4 corpus (a thin solid purple line with round markers). The normal distribution N(0.90, 0.66) is shown with a thin solid green line with asterisk markers. The middle 70% of essays fall between the dotted lines.]

To conclude the illustration, we observe that there are some broad similarities between the different corpora in terms of the distribution of pairs of word types. Thus, texts seem to be mainly made of pairs of weakly associated words – about half the pairs of word types lie between PMI of 0.5 and 1.5 in all the examined collections (52% for essays, 44% for each of the WSJ and young reader corpora). The percentages of pairs at the low and the high ends of PMI differ with genre – writing for children favors the higher end, while typical Wall Street Journal writing favors the low end, relative to a corpus of essays on general topics written by college graduates. These observations are necessarily very tentative, as only a few corpora were examined. Still,
we believe the illustration is suggestive, in that there is both constancy in writing for a similar purpose (observe the limited variation around the average that captures 70% of the essays) and variation with genre and target audience. In what follows, we will explore more thoroughly the information provided by word association profiles regarding the quality of writing.

4 Application to Essay Scoring

Texts written for a test and scored by relevant professionals is a setting where variation in text quality is expected. In this section, we report our experiments with using WAPs to explore the variation in quality as quantified by essay scores. We first describe the data (section 4.1), then show the patterns of relationships between essay scores and word association profiles (section 4.2). Finally, we report on an experiment where we significantly improve the performance of a very competitive, state-of-art system for automated scoring of essays, using a feature derived from WAP.

4.1 Data

We consider two collections of essays written as responses in an analytical writing section of a high-stakes standardized test for graduate school admission; the time limit for essay composition was 45 minutes. Essays were written in response to a prompt (essay question). A prompt is usually a general statement, and the test-taker is asked to develop an argument supporting or refuting the statement. Example prompts are: "High-speed electronic communications media, such as electronic mail and television, tend to prevent meaningful and thoughtful communication" and "In the age of television, reading books is not as important as it once was. People can learn as much by watching television as they can by reading books."

The first collection (henceforth, setA) contains 8,899 essays written in response to nine different prompts, about 1,000 per prompt;[6] the per-prompt subsets will be termed setA-p1 through setA-p9. Each essay in setA was scored by 1 to 4 human raters on a scale of 1 to 6; the majority of essays received 2 human scores. We use the average of the available human scores as the gold-standard score for the essay. Most essays thereby receive an integer score,[7] so the ranking of the essays is coarse. From this set, p1-p6 were used for feature selection, data visualization, and estimation of the regression models (training), while sets p7-p9 were reserved for a blind test.

Footnote 6: While we sampled exactly 1,000 essays per prompt, we removed empty responses, resulting in 975 to 1,000 essays per sample.

Footnote 7: As the two raters agree most of the time.

The second collection (henceforth, setB) contains 400 essays, with 200 essays written on each of two prompts given as examples above (setB-p1 and setB-p2). In an experimental study by Attali et al. (2013), each essay was scored by 16 professional raters on a scale of 1 to 6, allowing plus and minus scores as well, quantified as 0.33 – thus, a score of 4- is rendered as 3.67. This fine-grained scale resulted in higher mean pairwise inter-rater correlations than the traditional integer-only scale (r=0.79 vs around r=0.70 for the operational scoring). We use the average of 16 raters as the final grade for each essay. This dataset provides a very fine-grained ranking of the essays, with almost no two essays getting exactly the same score.

Rounded   setA p1-p9              setB
Score     av     min    max       p1     p2
1         .01    .00    .01       –      –
2         .05    .04    .06       .03    .03
3         .25    .20    .29       .30    .28
4         .44    .42    .47       .54    .55
5         .21    .16    .24       .13    .14
6         .04    .02    .07       .01    .02

Table 1: Score distribution in the essay data.
For the sake of presentation in this table, all scores were rounded to integer scores, so a score of 3.33 was counted as 3, and a score of 3.5 was counted as 4. A cell with the value of .13 (row titled 5 and column titled setB p1) means that 13% of the essays in setB-p1 received scores that round to 5. For setA, average, minimum, and maximum values across the nine prompts are shown.

Table 1 shows the distribution of rounded scores in both collections. Average essay scores are between 3.74 and 3.98 across the different prompts from both collections. The use of 16 raters seems to have moved the rounded scores towards the middle; however, the relative ranking of the essays is much more delicate in setB than in setA.

4.2 Essay Score vs WAP

We calculated correlations between essay score and the proportion of word pairs in each of the 60 bins of the WAP histogram, separately for each of the prompts p1-p6 in setA. For a sample of 1,000 instances, a correlation of r=0.065 is significant at p = 0.05. Figure 2 plots the correlations.

First, we observe that, perhaps contrary to expectation, the proportion of the highest values of PMI (the area to the right of PMI=4 in Figure 2) does not yield a consistent correlation with essay scores. Thus, inasmuch as the highest PMI values tend to capture multi-word expressions (South and Africa; Merrill and Lynch), morphological variants (bids and bidding), or synonyms (mergers and takeovers), their proportion in word type pairs does not seem to give a clear signal regarding the quality of writing.[8] In contrast, the area of moderately high PMI values (from PMI=2.5 to PMI=3.67 in Figure 2) produces a very consistent picture, with only two points out of 48 in that interval[9] lacking significant positive correlation with essay score (p2 at PMI=3.17 and p5 at PMI=3).

Next, observe the consistent negative correlations between essay score and the proportion of word pairs in bins PMI=0.833 through PMI=1.5. Here again, out of the 30 data points corresponding to these values, only 3 failed to reach statistical significance, although the trend there is still negative. Finally, there is a trend towards a positive correlation between essay scores and the proportion of mildly negative PMI values (-2<PMI<0), that is, better essays tend to use more pairs of dis-associated words, although this trend is not as clear-cut as the one on the right-hand side of the distribution.

Assuming that a higher proportion of high PMI pairs corresponds to more topic development and that a higher proportion of negative PMIs corresponds to more creative use of language (in that pairs are chosen that do not generally tend to appear together), it seems that the better essays are both more topical and more creative than the lower scoring ones. In what follows, we check whether the information about essay quality provided by WAP can be used to improve essay scoring.

Footnote 8: It is also possible that some of the instances with very high PMI are pairs that contain low frequency words for which the database predicts a spuriously high PMI based on a single (and a-typical) co-occurrence that happens to repeat in an essay – similar to the Schwartz eschews example in (Manning and Schütze, 1999, Table 5.16, p. 181). On the one hand, we do not expect such pairs to occur in any systematic pattern, so they could obscure an otherwise more systematic pattern in the high PMI bins. On the other hand, we do not expect to see many such pairs, simply because a repetition of an a-typical event is likely to be very rare.
We thank an anonymous reviewer for suggesting this direction, and leave a more detailed examination of the pairs in the highest-PMI bins to future work.

Footnote 9: There are 8 bins of width 0.167 in the given interval, with 6 data points per bin.

[Figure 2: Correlations (Pearson) with essay score for various bins of the WAP histogram, plotted against PMI. P1 to P6 correspond to the first 6 prompts in setA.]

4.3 Baseline

As a baseline, we use e-rater (Attali and Burstein, 2006), a state-of-art essay scoring system developed at Educational Testing Service.[10] E-rater computes more than 100 micro-features, which are aggregated into macro-features aligned with specific aspects of the writing construct. The system incorporates macro-features measuring grammar, usage, mechanics, style, organization and development, lexical complexity, and vocabulary usage. Table 2 gives examples of micro-features covered by the different macro-features. E-rater models are built using linear regression on large samples of test-taker essays. We use a generic e-rater model built at Educational Testing Service using essays across a variety of writing prompts, with no connection to the current project and its authors. This model obtains Pearson correlations of r=0.8324-0.8721 with the human scores on setA, and the staggering r=0.9191 and r=0.9146 with the human scores on setB-p1 and setB-p2, respectively. This is a very competitive baseline, as e-rater features explain more than 70% of the variation in essay scores on a relatively coarse scale (setA) and more than 80% of the variation in scores on a fine-grained scale (setB).

Footnote 10: http://www.ets.org/erater/about/

Macro-Feature                   Example Micro-Features
Grammar, Usage, and Mechanics   agreement errors; verb formation errors; missing punctuation
Style                           passive; very long or short sentences; excessive repetition
Organization and Development    use of discourse elements: thesis, support, conclusion
Lexical Complexity              average word frequency; average word length
Vocabulary                      similarity to vocabulary in high- vs low-scoring essays

Table 2: Features used in e-rater (Attali and Burstein, 2006).

4.4 Adding WAP

We define HAT – high associative tightness – as the percentage of word type pairs with 2.33<PMI≤3.67 (bins PMI=2.5 through PMI=3.67). This range corresponds to the longest sequence of adjacent bins in the PMI>0 area that had a positive correlation with essay score in the setA-p1 set. The HAT feature attains significant (at p = 0.05) correlations with essay scores, r=0.11 to r=0.27 for the prompts in setA, and r=0.22 and r=0.21 for the two prompts in setB. We note that the HAT feature is not correlated with essay length. Essay length is not used as a feature in e-rater models, but it typically correlates strongly with the human essay score (at about r=0.70 in our data), as well as with the score provided by e-rater (at about r=0.80).

We also explored a feature that captured the area with the negative correlations identified in section 4.2. This feature did not succeed in improving the performance over the baseline on setA p1-p6; we tentatively conclude that the information contained in that feature, i.e. the proportion of mildly associated vocabulary in an essay, is indirectly captured by another feature or group of features already present in e-rater.
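The following is a minimal sketch of the HAT computation and of the kind of two-feature regression used in the evaluation below. The per-essay PMI lists, the baseline scores, and the toy numbers are all illustrative assumptions; this is a generic ordinary-least-squares fit via numpy, not e-rater or the authors' exact estimation procedure.

```python
import numpy as np

def hat_feature(pmis, lo=2.33, hi=3.67):
    """Percentage of word-type pairs whose PMI falls in (lo, hi];
    `pmis` is the list of non-null pairwise PMI values for one essay."""
    if not pmis:
        return 0.0
    return 100.0 * sum(lo < p <= hi for p in pmis) / len(pmis)

def fit_score_model(baseline_scores, hat_values, human_scores):
    """Least-squares fit of human score on [baseline score, HAT, intercept]."""
    X = np.column_stack([baseline_scores, hat_values, np.ones(len(human_scores))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(human_scores, dtype=float), rcond=None)
    return coef  # weights for the baseline score, HAT, and the intercept

# Toy illustration with made-up numbers (not the paper's data):
essay_pmis = [[3.1, 0.9, 1.2, 2.6], [0.8, 1.1, 1.0], [2.9, 3.3, 0.5, 1.4]]
hat = [hat_feature(p) for p in essay_pmis]
coef = fit_score_model([3.8, 3.0, 4.2], hat, [4.0, 3.0, 4.3])
print(hat, coef)
```

A fitted model of this form can then be applied to held-out prompts and compared against the baseline score alone, which is the train/test design described next.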
Likewise, a feature that calculates the average PMI for all pairs of content word types in the text failed to produce an improvement over the baseline for setA p1-p6. The reason for this can be observed in Figure 2: because the higher-scoring essays have more of both the low and the high PMI pairs, their average PMI comes out about the same as for the lower-scoring essays, which have a higher concentration of values closer to the average PMI.

4.5 Evaluation

To evaluate the usefulness of WAP in improving automated scoring of essays, we estimate a linear regression model using the human score as the dependent variable (label) and the e-rater score and HAT as the two independent variables (features). The correlations between the two independent variables (e-rater and HAT) are between r=0.11 and r=0.24 on the prompts in setA and setB. We estimate a regression model on each of setA-pi, i ∈ {1, .., 6}, and evaluate them on each of setA-pj, j ∈ {7, .., 9}, and compare the performance with that of e-rater alone on setA-pj. Note that e-rater itself is not trained on any of the data in setA and setB; we use the same e-rater model for all evaluations, a generic model that was pretrained on a large number of essays across different prompts. For setB, we estimate the regression model on setB-p1 and test on setB-p2, and vice versa.

Table 3 shows the evaluation results. The HAT feature leads to a statistically significant improvement in the performance of automated scoring. An improvement is observed for 14 out of the 18 evaluations for setA, as well as for both evaluations for setB.[11] Moreover, the largest relative improvement of 0.55%, from 0.9191 to 0.9242, was observed for the setting with the highest baseline performance, suggesting that the HAT feature is still effective even after the delicate ranking of the essays revealed an exceptionally strong performance of e-rater.

       Train   Test   E-rater on Test   E-rater+HAT on Test   t
setA   p1      p7     0.84043           0.84021               -0.371
       p2      p7     0.84043           0.84045                0.408
       p3      p7     0.84043           0.83999               -0.597
       p4      p7     0.84043           0.84044                0.411
       p5      p7     0.84043           0.84028               -0.280
       p6      p7     0.84043           0.83926               -1.080
       p1      p8     0.83244           0.83316                1.688
       p2      p8     0.83244           0.83250                2.234
       p3      p8     0.83244           0.83327                1.530
       p4      p8     0.83244           0.83250                2.237
       p5      p8     0.83244           0.83311                1.752
       p6      p8     0.83244           0.83339                1.191
       p1      p9     0.86370           0.86612                4.282
       p2      p9     0.86370           0.86389                5.205
       p3      p9     0.86370           0.86659                4.016
       p4      p9     0.86370           0.86388                5.209
       p5      p9     0.86370           0.86591                4.390
       p6      p9     0.86370           0.86730                3.448
setB   p1      p2     0.9146            0.9178                 0.983
       p2      p1     0.9191            0.9242                 2.690

Table 3: Performance of the baseline model (e-rater) and of models where e-rater was augmented with HAT, a feature based on the word association profile. Performance is measured using Pearson correlation with essay score. We use the Wilcoxon Signed-Rank test for matched pairs, and report the sum of signed ranks (W), the number of ranks (n), and the p value. E-rater+HAT is significantly better than e-rater alone, W=138, n=20, p<0.05. We also measure the significance of the improvement for each row individually, using McNemar's test for significance of difference in same-sample correlations (McNemar, 1955, p.148); we report the t value for each test. For values of t > 1.645, we can reject the hypothesis that e-rater+HAT is not better than e-rater alone with 95% confidence. Significant improvements are underlined.

5 Related Work

Most of the attention in the computational linguistics research that deals with analysis of the lexis of texts has so far been paid to what in our terms would be the very high end of the word association profile.
Thus, following Halliday and Hasan (1976), Hoey (1991), and Morris and Hirst (1991), the notion of lexical cohesion has been used to capture repetitions of words and occurrence of words with related meanings in a text. Lexically cohesive words are traced through the text, forming lexical chains or graphs, and these representations are used in a variety of applications, such as segmentation, keyword extraction, summarization, sentiment analysis, temporal indexing, hyperlink generation, and error correction (Guinaudeau et al., 2012; Marathe and Hirst, 2010; Ercan and Cicekli, 2007; Devitt and Ahmad, 2007; Hirst and Budanitsky, 2005; Inkpen and Désilets, 2005; Gurevych and Strube, 2004; Stokes et al., 2004; Silber and McCoy, 2002; Green, 1998; Al-Halimi and Kazman, 1998; Barzilay and Elhadad, 1997). To our knowledge, lexical cohesion has not so far been used for automated scoring of essays. Our results suggest that this direction is promising, as merely the proportion of highly associated word pairs is already contributing a clear signal regarding essay quality; it is possible that additional information can be derived from richer representations common in the lexical cohesion literature.

Aspects related to the distribution of words in essays have been studied in relation to essay scoring. One line of work focuses on assessing the coherence of essays. Foltz et al. (1998) use Latent Semantic Analysis to model the smoothness of transitions between adjacent segments of an essay. Higgins et al. (2004) compare sentences from certain discourse segments in an essay to determine their semantic similarity, such as comparing thesis statements to conclusions or thesis statements to essay prompts. Additional approaches include evaluation of coherence based on repeated reference to entities (Burstein et al., 2010; Barzilay and Lapata, 2008; Miltsakaki and Kukich, 2004). Our approach is different in that it does not measure the flow of the text, that is, the sequencing and repetition of the words, but rather assesses the choice of vocabulary as a whole.

Footnote 11: We also performed a cross-validation test on setA p1-p6, where we estimated a regression model on setA-pi and evaluated it on setA-pj, for all i, j ∈ {1, .., 6}, i ≠ j, and compared the performance with that of e-rater alone on setA-pj, yielding 30 different train-test combinations. The results were similar to those of the blind test presented here, with e-rater+HAT significantly improving upon e-rater alone, using the Wilcoxon test, W=374, n=29, p<0.05.

Topic models have been proposed as a technique for capturing clusters of related words that tend to occur in the same documents in a given collection. A text is modeled as being composed of a small number of topics, and words in the text are generated conditioned on the selected topics (Gruber et al., 2007; Blei et al., 2003). Since (a) topics encapsulate clusters of highly associated words, and (b) topics for a given text are modeled as being chosen independently from each other, we expect a negative correlation between the number of topics in a document and the tightness of the word association profile of the text.

An alternative representation of the word association profile would be a weighted graph, where the weights correspond to pairwise associations between words. Thus, for longer texts, graph analysis techniques would be applicable.
Steyvers and Tenenbaum (2005) analyze the graphs induced from large repositories like WordNet or databases of free associations, and find them to be scale-free and small-world; it is an open question whether word association graphs induced from book-length texts would exhibit similar properties. In the theoretical tradition, our work is closest in spirit to Michael Hoey’s theory of lexical priming (Hoey, 2005), positing that users of language internalize patterns of occurrence and non-occurrence of words not only with other words, but also in certain positions in a text, in certain syntactic environments, and in certain evaluative contexts, and use these when creating their own texts. We believe that word association profiles reflect the artwork that goes into using those internalized associations between words when creating a text, achieving the right mix of strong and weak, positive and negative associations. 1155 6 Conclusion In this paper, we described a new representation of the content vocabulary of a text we call word association profile that captures the proportions of highly associated, mildly associated, unassociated, and dis-associated pairs of words selected to co-exist in the given text by its author. We observed that the shape of the distribution is quite stable across various texts, with about half the pairs having a mild association; the allocation of pairs to the higher and the lower levels of association does vary across genres and target audiences. We further presented a study of the relationship between quality of writing and word association profiles. For a dataset of essays written by college graduates on a number of general topics in a standardized test for graduate school admission and scored by professional raters, we showed that the higher scoring essays tend to have higher percentages of both highly associated and dis-associated pairs, and lower percentagese of mildly associated pairs of words. We hypothesize that this pattern is consistent with the better essays demonstrating both a better topic development (hence the higher percentage of highly related pairs) and a more creative use of language resources, as manifested in a higher percentage of word pairs that generally do not tend to appear together. Finally, we demonstrated that the information provided by word association profiles leads to a significant improvement in a highly competitive, state-of-art essay scoring system that already measures various aspects of writing quality. In future work, we intend to investigate in more detail the contribution of various kinds of words to word association profiles, as well as pursue application to evaluation of text complexity. References Reem Al-Halimi and Rick Kazman. 1998. Temporal indexing through lexical chaining. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 333–351. Cambridge, MA: MIT Press. Yigal Attali and Jill Burstein. 2006. Automated Essay Scoring With e-rater R⃝V.2. Journal of Technology, Learning, and Assessment, 4(3). Yigal Attali, Will Lewis, and Michael Steier. 2013. Scoring with the computer: Alternative procedures for improving reliability of holistic essay scoring. Language Testing, 30(1):125–141. Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721. Regina Barzilay and Michael Elhadad. 1997. Using lexical chains for text summarization. In Proceedings of ACL Intelligent Scalable Text Summarization Workshop. 
Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34. Roberto Basili and Marco Pennacchiotti. 2010. Distributional lexical semantics: Toward uniform representation paradigms for advanced acquisition and processing tasks. Natural Language Engineering, 16(4):347–358. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. John Bullinaria and Joseph Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39:510–526. Jill Burstein, Joel Tetreault, and Slava Andreyev. 2010. Using entity-based features to model coherence in student essays. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 681–684, Los Angeles, California, June. Association for Computational Linguistics. Kenneth Church and Patrick Hanks. 1990. Word association norms, mutual information and lexicography. Computational Linguistics, 16(1):22–29. Ann Devitt and Khurshid Ahmad. 2007. Sentiment polarity identification in financial news: A cohesionbased approach. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 984–991, Prague, Czech Republic, June. Association for Computational Linguistics. Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 334– 343, Stroudsburg, PA, USA. Association for Computational Linguistics. Gonenc Ercan and Ilyas Cicekli. 2007. Using lexical chains for keyword extraction. Information Processing & Management, 43(6):1705–1714. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 897–906, Honolulu, Hawaii, October. Association for Computational Linguistics. 1156 Stefan Evert. 2008. Corpora and collocations. In A. L¨udeling and M. Kyt¨o, editors, Corpus Linguistics: An International Handbook. Berlin: Mouton de Gruyter. Michael Flor. 2013. A fast and flexible architecture for very large word n-gram datasets. Natural Language Engineering, 19(1):61–93. Peter Foltz, Walter Kintsch, and Thomas Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Processes, 25(2):285–307. Yoko Futagi, Paul Deane, Martin Chodorow, and Joel Tetreault. 2008. A computational approach to detecting collocation errors in the writing of non-native speakers of English. Computer Assisted Language Learning, 21(4):353–367. David Graff and Christopher Cieri. 2003. English Gigaword LDC2003T05. Linguistic Data Consortium, Philadelphia. Stephen Green. 1998. Automated link generation: Can we do better than term repetition? Computer Networks, 30:75–84. Amit Gruber, Yair Weiss, and Michal Rosen-Zvi. 2007. Hidden topic markov models. Journal of Machine Learning Research - Proceedings Track, 2:163–170. Camille Guinaudeau, Guillaume Gravier, and Pascale S´ebillot. 2012. Enhancing lexical cohesion measure with confidence measures, semantic relations and language model interpolation for multimedia spoken content topic segmentation. Computer Speech and Language, 26(2):90–104. Iryna Gurevych and Michael Strube. 2004. 
Semantic similarity applied to spoken dialogue summarization. In Proceedings of Coling 2004, pages 764– 770, Geneva, Switzerland, August. COLING. Michael A.K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman, London. Marti Hearst. 1997. Texttiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33–64. Derrick Higgins, Jill Burstein, Daniel Marcu, and Claudia Gentile. 2004. Evaluating multiple aspects of coherence in student essays. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 185–192, Boston, Massachusetts, USA, May. Association for Computational Linguistics. Graeme Hirst and Alexander Budanitsky. 2005. Correcting real-word spelling errors by restoring lexical cohesion. Natural Language Engineering, 11(1):87–111. Michael Hoey. 1991. Patterns of Lexis in Text. Oxford University Press. Michael Hoey. 2005. Lexical Priming. Routledge. Diana Inkpen and Alain D´esilets. 2005. Semantic similarity for detecting recognition errors in automatic speech transcripts. In Proceedings of Empirical Methods in Natural Language Processing Conference, pages 49–56, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. Michael Jones and Douglas Mewhort. 2007. Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 114(1):1–37. Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211–240. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of ACL, pages 768–774, Montreal, Canada. Kevin Lund and Curt Burgess. 1996. Producing high-dimensional semantic spaces from lexical cooccurrence. Behavior Research Methods, Instruments & Computers, 28:203–208. Christopher D. Manning and Hinrich Sch¨utze. 1999. Foundations of statistical natural language processing. MIT Press, Cambridge, MA, USA. Meghana Marathe and Graeme Hirst. 2010. Lexical Chains Using Distributional Measures of Concept Distance. In Proceedings of 11th International Conference on Intelligent Text Processing and Computational Linguistics (CICLING), pages 291–302, Iasi, Romania, March. Quinn McNemar. 1955. Psychological Statistics. New York: J. Wiley and Sons, 2nd edition. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Hemant Misra, Franc¸ois Yvon, Joemon M. Jose, and Olivier Cappe. 2009. Text segmentation via topic modeling: an analytical study. In Proceedings of the 18th ACM conference on Information and knowledge management, CIKM ’09, pages 1553–1556, New York, NY, USA. ACM. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 236–244, Columbus, Ohio, June. Association for Computational Linguistics. Jane Morris and Graeme Hirst. 1991. Lexical cohesion, the thesaurus, and the structure of text. Computational linguistics, 17(1):21–48. 1157 Sebastian Pado and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. Pavel Pecina. 2010. Lexical association measures and collocation extraction. Language Resources and Evaluation, 44:137–158. 
David Reshef, Yakir Reshef, Hilary Finucane, Sharon Grossman, Gilean McVean, Peter Turnbaugh, Eric Lander, Michael Mitzenmacher, and Pardis Sabeti. 2011. Detecting novel associations in large data sets. Science, 334(6062):1518–1524. Martin Riedl and Chris Biemann. 2012. How text segmentation algorithms gain from topic models. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 553–557, Montr´eal, Canada, June. Association for Computational Linguistics. Gerard Salton, Andrew Wong, and Chung-Shu Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620. Kathy Sheehan, Irene Kostin, and Yoko Futagi. 2008. When do standard approaches for measuring vocabulary difficulty, syntactic complexity and referential cohesion yield biased estimates of text difficulty? In Proceedings of the Cognitive Science Society, pages 1978–1983, Washington, DC, July. Gregory Silber and Kathleen McCoy. 2002. Efficiently computed lexical chains as an intermediate representation for automatic text summarization. Computational Linguistics, 28(4):487–496. Mark Steyvers and Joshua B. Tenenbaum. 2005. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth. Cognitive Science, 29:41–78. Nicola Stokes, Joe Carthy, and Alan F. Smeaton. 2004. Select: A lexical cohesion based news story segmentation system. Journal of AI Communications, 17(1):3–12. Peter Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Articial Intelligence Research, 37:141–188. Peter D. Turney. 2001. Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL. In European Conference on Machine Learning, pages 491–502, Freiburg, Germany, September. Ziqi Zhang, Anna Gentile, and Fabio Ciravegna. 2012. Recent advances in methods of lexical semantic relatedness – a survey. Natural Language Engineering, FirstView:1–69. 1158
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1159–1168, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Adaptive Parser-Centric Text Normalization Congle Zhang∗ Dept of Computer Science and Engineering University of Washington, Seattle, WA 98195, USA [email protected] Tyler Baldwin Howard Ho Benny Kimelfeld Yunyao Li IBM Research - Almaden 650 Harry Road, San Jose, CA 95120, USA {tbaldwi,ctho,kimelfeld,yunyaoli}@us.ibm.com Abstract Text normalization is an important first step towards enabling many Natural Language Processing (NLP) tasks over informal text. While many of these tasks, such as parsing, perform the best over fully grammatically correct text, most existing text normalization approaches narrowly define the task in the word-to-word sense; that is, the task is seen as that of mapping all out-of-vocabulary non-standard words to their in-vocabulary standard forms. In this paper, we take a parser-centric view of normalization that aims to convert raw informal text into grammatically correct text. To understand the real effect of normalization on the parser, we tie normalization performance directly to parser performance. Additionally, we design a customizable framework to address the often overlooked concept of domain adaptability, and illustrate that the system allows for transfer to new domains with a minimal amount of data and effort. Our experimental study over datasets from three domains demonstrates that our approach outperforms not only the state-of-the-art wordto-word normalization techniques, but also manual word-to-word annotations. 1 Introduction Text normalization is the task of transforming informal writing into its standard form in the language. It is an important processing step for a wide range of Natural Language Processing (NLP) tasks such as text-to-speech synthesis, speech recognition, information extraction, parsing, and machine translation (Sproat et al., 2001). ∗This work was conducted at IBM. The use of normalization in these applications poses multiple challenges. First, as it is most often conceptualized, normalization is seen as the task of mapping all out-of-vocabulary non-standard word tokens to their in-vocabulary standard forms. However, the scope of the task can also be seen as much wider, encompassing whatever actions are required to convert the raw text into a fully grammatical sentence. This broader definition of the normalization task may include modifying punctuation and capitalization, and adding, removing, or reordering words. Second, as with other NLP techniques, normalization approaches are often focused on one primary domain of interest (e.g., Twitter data). Because the style of informal writing may be different in different data sources, tailoring an approach towards a particular data source can improve performance in the desired domain. However, this is often done at the cost of adaptability. This work introduces a customizable normalization approach designed with domain transfer in mind. In short, customization is done by providing the normalizer with replacement generators, which we define in Section 3. We show that the introduction of a small set of domain-specific generators and training data allows our model to outperform a set of competitive baselines, including state-of-the-art word-to-word normalization. 
Additionally, the flexibility of the model also allows it to attempt to produce fully grammatical sentences, something not typically handled by word-to-word normalization approaches. Another potential problem with state-of-the-art normalization is the lack of appropriate evaluation metrics. The normalization task is most frequently motivated by pointing to the need for clean text for downstream processing applications, such as syntactic parsing. However, most studies of normalization give little insight into whether and to what degree the normalization process improves 1159 the performance of the downstream application. For instance, it is unclear how performance measured by the typical normalization evaluation metrics of word error rate and BLEU score (Papineni et al., 2002) translates into performance on a parsing task, where a well placed punctuation mark may provide more substantial improvements than changing a non-standard word form. To address this problem, this work introduces an evaluation metric that ties normalization performance directly to the performance of a downstream dependency parser. The rest of this paper is organized as follows. In Section 2 we discuss previous approaches to the normalization problem. Section 3 presents our normalization framework, including the actual normalization and learning procedures. Our instantiation of this model is presented in Section 4. In Section 5 we introduce the parser driven evaluation metric, and present experimental results of our model with respect to several baselines in three different domains. Finally, we discuss our experimental study in Section 6 and conclude in Section 7. 2 Related Work Sproat et al. (2001) took the first major look at the normalization problem, citing the need for normalized text for downstream applications. Unlike later works that would primarily focus on specific noisy data sets, their work is notable for attempting to develop normalization as a general process that could be applied to different domains. The recent rise of heavily informal writing styles such as Twitter and SMS messages set off a new round of interest in the normalization problem. Research on SMS and Twitter normalization has been roughly categorized as drawing inspiration from three other areas of NLP (Kobus et al., 2008): machine translation, spell checking, and automatic speech recognition. The statistical machine translation (SMT) metaphor was the first proposed to handle the text normalization problem (Aw et al., 2006). In this mindset, normalizing SMS can be seen as a translation task from a source language (informal) to a target language (formal), which can be undertaken with typical noisy channel based models. Work by Choudhury et al. (2007) adopted the spell checking metaphor, casting the problem in terms of character-level, rather than word-level, edits. They proposed an HMM based model that takes into account both grapheme and phoneme information. Kobus et al. (2008) undertook a hybrid approach that pulls inspiration from both the machine translation and speech recognition metaphors. Many other approaches have been examined, most of which are at least partially reliant on the above three metaphors. Cook and Stevenson (2009) perform an unsupervised method, again based on the noisy channel model. Pennell and Liu (2011) developed a CRF tagger for deletion-based abbreviation on tweets. Xue et al. (2011) incorporated orthographic, phonetic, contextual, and acronym expansion factors to normalize words in both Twitter and SMS. Liu et al. 
(2011) modeled the generation process from dictionary words to non-standard tokens under an unsupervised sequence labeling framework. Han and Baldwin (2011) use a classifier to detect illformed words, and then generate correction candidates based on morphophonemic similarity. Recent work has looked at the construction of normalization dictionaries (Han et al., 2012) and on improving coverage by integrating different human perspectives (Liu et al., 2012). Although it is almost universally used as a motivating factor, most normalization work does not directly focus on improving downstream applications. While a few notable exceptions highlight the need for normalization as part of textto-speech systems (Beaufort et al., 2010; Pennell and Liu, 2010), these works do not give any direct insight into how much the normalization process actually improves the performance of these systems. To our knowledge, the work presented here is the first to clearly link the output of a normalization system to the output of the downstream application. Similarly, our work is the first to prioritize domain adaptation during the new wave of text message normalization. 3 Model In this section we introduce our normalization framework, which draws inspiration from our previous work on spelling correction for search (Bao et al., 2011). 3.1 Replacement Generators Our input the original, unnormalized text, represented as a sequence x = x1, x2, . . . , xn of tokens xi. In this section we will use the following se1160 quence as our running example: x = Ay1 woudent2 of3 see4 ′em5 where space replaces comma for readability, and each token is subscripted by its position. Given the input x, we apply a series of replacement generators, where a replacement generator is a function that takes x as input and produces a collection of replacements. Here, a replacement is a statement of the form “replace tokens xi, . . . , xj−1 with s.” More precisely, a replacement is a triple ⟨i, j, s⟩, where 1 ≤i ≤j ≤n + 1 and s is a sequence of tokens. Note that in the case where i = j, the sequence s should be inserted right before xi; and in the special case where s is empty, we simply delete xi, . . . , xj−1. For instance, in our running example the replacement ⟨2, 3, would not⟩replaces x2 = woudent with would not; ⟨1, 2, Ay⟩replaces x1 with itself (hence, does not change x); ⟨1, 2, ϵ⟩(where ϵ is the empty sequence) deletes x1; ⟨6, 6, .⟩inserts a period at the end of the sequence. The provided replacement generators can be either generic (cross domain) or domain-specific, allowing for domain customization. In Section 4, we discuss the replacement generators used in our empirical study. 3.2 Normalization Graph Given the input x and the set of replacements produced by our generators, we associate a unique Boolean variable Xr with each replacement r. As expected, Xr being true means that the replacement r takes place in producing the output sequence. Next, we introduce dependencies among variables. We first discuss the syntactic consistency of truth assignments. Let r1 = ⟨i1, j1, s1⟩and r2 = ⟨i2, j2, s2⟩be two replacements. We say that r1 and r2 are locally consistent if the intervals [i1, j1) and [i2, j2) are disjoint. Moreover, we do not allow two insertions to take place at the same position; therefore, we exclude [i1, j1) and [i2, j2) from the definition of local consistency when i1 = j1 = i2 = j2. If r1 and r2 are locally consistent and j1 = i2, then we say that r2 is a consistent follower of r1. 
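The interval-based definitions above translate directly into code. Below is a minimal sketch (our own, not the authors' implementation) of the replacement triple ⟨i, j, s⟩ together with the local-consistency and consistent-follower tests; the class and function names are ours, and insertions are treated as occupying their anchor position.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Replacement:
    """A replacement <i, j, s>: rewrite tokens x_i .. x_{j-1} as the sequence s.
    i == j encodes a pure insertion before x_i; an empty s encodes a deletion."""
    i: int
    j: int
    s: Tuple[str, ...]

def locally_consistent(r1: Replacement, r2: Replacement) -> bool:
    # Two insertions at the same position are never locally consistent.
    if r1.i == r1.j == r2.i == r2.j:
        return False
    # Otherwise the intervals [i, j) must not interleave.
    return r1.j <= r2.i or r2.j <= r1.i

def consistent_follower(r1: Replacement, r2: Replacement) -> bool:
    # r2 consistently follows r1 when they are locally consistent and j1 == i2.
    return locally_consistent(r1, r2) and r1.j == r2.i

# Running example: x = Ay_1 woudent_2 of_3 see_4 'em_5
r_a = Replacement(2, 3, ("would", "not"))   # woudent -> "would not"
r_b = Replacement(3, 4, ("have",))          # of -> have
assert consistent_follower(r_a, r_b)
```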
A truth assignment α to our variables Xr is sound if every two replacements r and r′ with α(Xr) = α(Xr′) = true are locally consistent. We say that α is complete if every token of x is captured by at least one replacement r with α(Xr) = true. Finally, we say that α is legal if it is sound and complete. The output (normalized sequence) defined by a legal assignment α is, naturally, the concatenation (from left to right) of the strings s in the replacements r = ⟨i, j, s⟩with α(Xr) = true. In Figure 1, for example, if the nodes with a grey shade are the ones associated with true variables under α, then the output defined by α is I would not have seen them. Our variables carry two types of interdependencies. The first is that of syntactic consistency: the entire assignment is required to be legal. The second captures correlation among replacements. For instance, if we replace of with have in our running example, then the next see token is more likely to be replaced with seen. In this work, dependencies of the second type are restricted to pairs of variables, where each pair corresponds to a replacement and a consistent follower thereof. The above dependencies can be modeled over a standard undirected graph using Conditional Random Fields (Lafferty et al., 2001). However, the graph would be complex: in order to model local consistency, there should be edges between every two nodes that violate local consistency. Such a model renders inference and learning infeasible. Therefore, we propose a clearer model by a directed graph, as illustrated in Figure 1 (where nodes are represented by replacements r instead of the variables Xr, for readability). To incorporate correlation among replacements, we introduce an edge from Xr to Xr′ whenever r′ is a consistent follower of r. Moreover, we introduce two dummy nodes, start and end, with an edge from start to each variable that corresponds to a prefix of the input sequence x, and an edge from each variable that corresponds to a suffix of x to end. The principal advantage of modeling the dependencies in such a directed graph is that now, the legal assignments are in one-to-one correspondence with the paths from start to end; this is a straightforward observation that we do not prove here. We appeal to the log-linear model formulation to define the probability of an assignment. The conditional probability of an assignment α, given an input sequence x and the weight vector Θ = ⟨θ1, . . . , θk⟩for our features, is defined as p(α | 1161 ⟨1, 2, I⟩ end ⟨2, 4, would not have⟩ ⟨1, 2, Ay⟩ ⟨5, 6, them⟩ ⟨4, 5, seen⟩ ⟨2, 3, would⟩ ⟨4, 6, see him⟩ ⟨3, 4, of⟩ start ⟨6, 6, .⟩ Figure 1: Example of a normalization graph; the nodes are replacements generated by the replacement generators, and every path from start to end implies a legal assignment x, Θ) = 0 if α is not legal, and otherwise, p(α | x, Θ) = 1 Z(x) Y X→Y ∈α exp( X j θjφj(X, Y, x)) . Here, Z(x) is the partition function, X →Y ∈α refers to an edge X →Y with α(X) = true and α(Y ) = true, and φ1(X, Y, x), . . . , φk(X, Y, x) are real valued feature functions that are weighted by θ1, . . . , θk (the model’s parameters), respectively. 3.3 Inference When performing inference, we wish to select the output sequence with the highest probability, given the input sequence x and the weight vector Θ (i.e., MAP inference). Specifically, we want an assignment α⋆= arg maxα p(α | x, Θ). 
While exact inference is computationally hard on general graph models, in our model it boils down to finding the longest path in a weighted and acyclic directed graph. Indeed, our directed graph (illustrated in Figure 1) is acyclic. We assign the real value P j θjφj(X, Y, x) to the edge X →Y , as the weight. As stated in Section 3.2, a legal assignment α corresponds to a path from start to end; moreover, the sum of the weights on that path is equal to log p(α | x, Θ) + log Z(x). In particular, a longer path corresponds to an assignment with greater probability. Therefore, we can solve the MAP inference within our model by finding the weighted longest path in the directed acyclic graph. The algorithm in Figure 2 summarizes the inference procedure to normalize the input sequence x. Input: 1. A sequence x to normalize; 2. A weight vector Θ = ⟨θ1, . . . , θk⟩. Generate replacements: Apply all replacement generators to get a set of replacements r, each r is a triple ⟨i, j, s⟩. Build a normalization graph: 1. For each replacement r, create a node Xr. 2. For each r′ and r, create an edge Xr to Xr′ if r′ is a consistent follower of r. 3. Create two dummy nodes start and end, and create edges from start to all prefix nodes and end to all suffix nodes. 4. For each edge X →Y , compute the features φj(X, Y, x), and weight the edge by P j θjφj(X, Y, x). MAP Inference: Find a weighted longest path P from start to end, and return α∗, where α∗(Xr) = true iff Xr ∈P. Figure 2: Normalization algorithm 3.4 Learning Our labeled data consists of pairs (xi, ygold i ), where xi is an input sequence (to normalize) and ygold i is a (manually) normalized sequence. We obtain a truth assignment αgold i from each ygold i by selecting an assignment α that minimizes the edit distance between ygold i and the normalized text implied by α: αgold i = arg min α DIST(y(α), ygold i ) (1) Here, y(α) denotes the normalized text implied by α, and DIST is a token-level edit distance. We apply a simple dynamic-programming algorithm to compute αgold i . Finally, the items in our training data are the pairs (xi, αgold i ). Learning over similar models is commonly done via maximum likelihood estimation: L(Θ) = log Y i p(αi = αgold i | xi, Θ) Taking the partial derivative gives the following: X i  Φj(αgold i , xi) −Ep(αi|xi,Θ)Φj(αi, xi)  where Φj(α, x) = P X→Y φj(X, Y, x), that is, the sum of values for the jth feature along the 1162 Input: 1. A set {(xi, ygold i )} n i=1 of sequences and their gold normalization; 2. Number T of iterations. Initialization: Initialize each θj as zero, and obtain each αgold i according to (1). Repeat T times: 1. Infer each α∗ i from xi using the current Θ; 2. θj ←θj+P i(Φj(αgold i , xi)−Φj(α∗ i , xi)) for all j = 1, . . . , k. Output: Θ = ⟨θ1, . . . , θk⟩ Figure 3: Learning algorithm path defined by α, and Ep(αi|xi,Θ)Φj(αi, xi) is the expected value of that sum (over all legal assignments αi), assuming the current weight vector. How to efficiently compute Ep(αi|xi,Θ)Φj(αi, xi) in our model is unclear; naively, it requires enumerating all legal assignments. We instead opt to use a more tractable perceptron-style algorithm (Collins, 2002). Instead of computing the expectation, we simply compute Φj(α∗ i , xi), where α∗ i is the assignment with the highest probability, generated using the current weight vector. The result is then: X i  Φj(αgold i , xi) −Φj(α∗ i , xi)  Our learning applies the following two steps iteratively. (1) Generate the most probable sequence within the current weights. 
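Step (1), the decode, is exactly the weighted longest-path computation of Section 3.3. The sketch below is our own code with hypothetical helper names: it assumes the normalization graph has already been built as an adjacency list with start and end nodes, each edge weighted by the feature score Σ_j θ_j φ_j(X, Y, x), and a topological order of the nodes (e.g., obtained by sorting replacement nodes by their intervals).

```python
from collections import defaultdict
from typing import Dict, Hashable, List, Tuple

def longest_path(edges: Dict[Hashable, List[Tuple[Hashable, float]]],
                 topo_order: List[Hashable],
                 start: Hashable, end: Hashable) -> List[Hashable]:
    """MAP decode: the best-scoring start->end path in the acyclic
    normalization graph corresponds to the most probable legal assignment.
    edges[u] lists (v, weight) pairs with weight = sum_j theta_j * phi_j(u, v, x)."""
    best = defaultdict(lambda: float("-inf"))
    back = {}
    best[start] = 0.0
    for u in topo_order:                      # single dynamic-programming pass
        if best[u] == float("-inf"):
            continue
        for v, w in edges.get(u, []):
            if best[u] + w > best[v]:
                best[v] = best[u] + w
                back[v] = u
    # Recover the path, i.e. the replacements set to true, by backtracking.
    path, node = [end], end
    while node != start:
        node = back[node]
        path.append(node)
    return list(reversed(path))
```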
(2) Update the weights by comparing the path generated in the previous step to the gold standard path. The algorithm in Figure 3 summarizes the procedure. 4 Instantiation In this section, we discuss our instantiation of the model presented in the previous section. In particular, we describe our replacement generators and features. 4.1 Replacement Generators One advantage of our proposed model is that the reliance on replacement generators allows for strong flexibility. Each generator can be seen as a black box, allowing replacements that are created heuristically, statistically, or by external tools to be incorporated within the same framework. Generator From To leave intact good good edit distance bac back lowercase NEED need capitalize it It Google spell disspaear disappear contraction wouldn’t would not slang language ima I am going to insert punctuation ϵ . duplicated punctuation !? ! delete filler lmao ϵ Table 1: Example replacement generators To build a set of generic replacement generators suitable for normalizing a variety of data types, we collected a set of about 400 Twitter posts as development data. Using that data, a series of generators were created; a sample of them are shown in Table 1. As shown in the table, these generators cover a variety of normalization behavior, from changing non-standard word forms to inserting and deleting tokens. 4.2 Features Although the proposed framework supports real valued features, all features in our system are binary. In total, we used 70 features. Our feature set pulls information from several different sources: N-gram: Our n-gram features indicate the frequency of the phrases induced by an edge. These features are turned into binary ones by bucketing their log values. For example, on the edge from ⟨1, 2, I⟩to ⟨2, 3, would⟩such a feature will indicate whether the frequency of I would is over a threshold. We use the Corpus of Contemporary English (Davies, 2008 ) to produce our n-gram information. Part-of-speech: Part-of-speech information can be used to produce features that encourage certain behavior, such as avoiding the deletion of noun phrases. We generate part-of-speech information over the original raw text using a Twitter part-of-speech tagger (Ritter et al., 2011). Of course, the part-of-speech information obtained this way is likely to be noisy, and we expect our learning algorithm to take that into account. Positional: Information from positions is used primarily to handle capitalization and punctuation insertion, for example, by incorporating features for capitalized words after stop punctuation or the insertion of stop punctuation at the end of the sentence. Lineage: Finally, we include binary features 1163 that indicate which generator spawned the replacement. 5 Evaluation In this section, we present an empirical study of our framework. The study is done over datasets from three different domains. The goal is to evaluate the framework in two aspects: (1) usefulness for downstream applications (specifically dependency parsing), and (2) domain adaptability. 5.1 Evaluation Metrics A few different metrics have been used to evaluate normalizer performance, including word error rate and BLEU score. While each metric has its pros and cons, they all rely on word-to-word matching and treat each word equally. In this work, we aim to evaluate the performance of a normalizer based on how it affects the performance of downstream applications. We find that the conventional metrics are not directly applicable, for several reasons. 
To begin with, the assumption that words have equal weights is unlikely to hold. Additionally, these metrics tend to ignore other important non-word information such as punctuation or capitalization. They also cannot take into account other aspects that may have an impact on downstream performance, such as the word reordering as seen in the example in Figure 4. Therefore, we propose a new evaluation metric that directly equates normalization performance with the performance of a common downstream application—dependency parsing. To realize our desired metric, we apply the following procedure. First, we produce gold standard normalized data by manually normalizing sentences to their full grammatically correct form. In addition to the word-to-word mapping performed in typical normalization gold standard generation, this annotation procedure includes all actions necessary to make the sentence grammatical, such as word reordering, modifying capitalization, and removing emoticons. We then run an off-the-shelf dependency parser on the gold standard normalized data to produce our gold standard parses. Although the parser could still produce mistakes on the grammatical sentences, we feel that this provides a realistic benchmark for comparison, as it represents an upper bound on the possible performance of the parser, and avoids an expensive second round of manual annotation. Test Gold SVO I kinda wanna get ipad NEW I kind of want to get a new iPad. verb(get) verb(want) verb(get) precisionv = 1 1 recallv = 1 2 subj(get,I) subj(get,wanna) obj(get,NEW) subj(want,I) subj(get,I) obj(get,iPad) precisionso = 1 3 recallso = 1 3 Figure 4: The subjects, verbs, and objects identified on example test/gold text, and corresponding metric scores To compare the parses produced over automatically normalized data to the gold standard, we look at the subjects, verbs, and objects (SVO) identified in each parse. The metric shown in Equations (2) and (3) below is based on the identified subjects and objects in those parses. Note that SO denotes the set of identified subjects and objects whereas SOgold denotes the set of subjects and objects identified when parsing the gold-standard normalization. precisionso = |SO ∩SOgold| |SO| (2) recallso = |SO ∩SOgold| |SOgold| (3) We similarly define precisionv and recallv, where we compare the set V of identified verbs to V gold of those found in the gold-standard normalization. An example is shown in Figure 4. 5.2 Results To establish the extensibility of our normalization system, we present results in three different domains: Twitter posts, Short Message Service (SMS) messages, and call-center logs. For Twitter and SMS messages, we used established datasets to compare with previous work. As no established call-center log dataset exists, we collected our own. In each case, we ran the proposed system with two different configurations: one using only the generic replacement generators presented in Section 4 (denoted as generic), and one that adds additional domain-specific generators for the corresponding domain (denoted as domain-specific). All runs use ten-fold cross validation for training and evaluation. The Stanford parser1 (Marneffe et al., 2006) was used to produce all dependency 1Version 2.0.4, http://nlp.stanford.edu/ software/lex-parser.shtml 1164 parses. We compare our system to the following baseline solutions: w/oN: No normalization is performed. Google: Output of the Google spell checker. w2wN: The output of the word-to-word normalization of Han and Baldwin (2011). 
Not available for call-center data. Gw2wN: The manual gold standard word-toword normalizations of previous work (Choudhury et al., 2007; Han and Baldwin, 2011). Not available for call-center data. Our results use the metrics of Section 5.1. 5.2.1 Twitter To evaluate the performance on Twitter data, we use the dataset of randomly sampled tweets produced by (Han and Baldwin, 2011). Because the gold standard used in this work only provided word mappings for out-of-vocabulary words and did not enforce grammaticality, we reannotated the gold standard data2. Their original gold standard annotations were kept as a baseline. To produce Twitter-specific generators, we examined the Twitter development data collected for generic generator production (Section 4). These generators focused on the Twitter-specific notions of hashtags (#), ats (@), and retweets (RT). For each case, we implemented generators that allowed for either the initial symbol or the entire token to be deleted (e.g., @Hertz to Hertz, @Hertz to ϵ). The results are given in Table 2. As shown, the domain-specific generators yielded performance significantly above the generic ones and all baselines. Even without domain-specific generators, our system outperformed the word-to-word normalization approaches. Most notably, both the generic and domain-specific systems outperformed the gold standard word-to-word normalizations. These results validate the hypothesis that simple word-to-word normalization is insufficient if the goal of normalization is to improve dependency parsing; even if a system could produce perfect word-to-word normalization, it would produce lower quality parses than those produced by our approach. 2Our results and the reannotations of the Twitter and SMS data are available at https://www.cs.washington. edu/node/9091/ System Verb Subject-Object Pre Rec F1 Pre Rec F1 w/oN 83.7 68.1 75.1 31.7 38.6 34.8 Google 88.9 78.8 83.5 36.1 46.3 40.6 w2wN 87.5 81.5 84.4 44.5 58.9 50.7 Gw2w 89.8 83.8 86.7 46.9 61.0 53.0 generic 91.7 88.9 90.3 53.6 70.2 60.8 domain specific 95.3 88.7 91.9 72.5 76.3 74.4 Table 2: Performance on Twitter dataset 5.2.2 SMS To evaluate the performance on SMS data, we use the Treasure My Text data collected by Choudhury et al. (2007). As with the Twitter data, the word-to-word normalizations were reannotated to enforce grammaticality. As a replacement generator for SMS-specific substitutions, we used a mapping dictionary of SMS abbreviations.3 No further SMS-specific development data was needed. Table 3 gives the results on the SMS data. The SMS dataset proved to be more difficult than the Twitter dataset, with the overall performance of every system being lower. While this drop of performance may be a reflection of the difference in data styles between SMS and Twitter, it is also likely a product of the collection methodology. The collection methodology of the Treasure My Text dataset dictated that every message must have at least one mistake, which may have resulted in a dataset that was noisier than average. Nonetheless, the trends on SMS data mirror those on Twitter data, with the domain-specific generators achieving the greatest overall performance. However, while the generic setting still manages to outperform most baselines, it did not outperform the gold word-to-word normalization. In fact, the gold word-to-word normalization was much more competitive on this data, outperforming even the domain-specific system on verbs alone. 
This should not be seen as surprising, as word-to-word normalization is most likely to be beneficial for cases like this where the proportion of non-standard tokens is high. It should be noted that the SMS dataset as available has had all punctuation removed. While this may be appropriate for word-to-word normalization, this preprocessing may have an effect on the parse of the sentence. As our system has the ability to add punctuation but our baseline systems do not, this has the potential to artificially inflate our results. To ensure a fair comparison, we manually 3http://www.netlingo.com/acronyms.php 1165 System Verb Subject-Object Rec Pre F1 Rec Pre F1 w/oN 76.4 48.1 59.0 19.5 21.5 20.4 Google 85.1 61.6 71.5 22.4 26.2 24.1 w2wN 78.5 61.5 68.9 29.9 36.0 32.6 Gw2wN 87.6 76.6 81.8 38.0 50.6 43.4 generic 86.5 77.4 81.7 35.5 47.7 40.7 domain specific 88.1 75.0 81.0 41.0 49.5 44.8 Table 3: Performance on SMS dataset System Verb Subject-Object Pre Rec F1 Pre Rec F1 w/oN 98.5 97.1 97.8 69.2 66.1 67.6 Google 99.2 97.9 98.5 70.5 67.3 68.8 generic 98.9 97.4 98.1 71.3 67.9 69.6 domain specific 99.2 97.4 98.3 87.9 83.1 85.4 Table 4: Performance on call-center dataset added punctuation to a randomly selected small subset of the SMS data and reran each system. This experiment suggested that, in contrast to the hypothesis, adding punctuation actually improved the results of the proposed system more substantially than that of the baseline systems. 5.2.3 Call-Center Although Twitter and SMS data are unmistakably different, there are many similarities between the two, such as the frequent use of shorthand word forms that omit letters. The examination of callcenter logs allows us to examine the ability of our system to perform normalization in more disparate domains. Our call-center data consists of textbased responses to questions about a user’s experience with a call-center (e.g., their overall satisfaction with the service). We use call-center logs from a major company, and collect about 150 responses for use in our evaluation. We collected an additional small set of data to develop our callcenter-specific generators. Results on the call-center dataset are in Table 4. As shown, the raw call-center data was comparatively clean, resulting in higher baseline performance than in other domains. Unlike on previous datasets, the use of generic mappings only provided a small improvement over the baseline. However, the use of domain-specific generators once again led to significantly increased performance on subjects and objects. 6 Discussion The results presented in the previous section suggest that domain transfer using the proposed normalization framework is possible with only a small amount of effort. The relatively modest set of additional replacement generators included in each data set allowed the domain-specific approaches to significantly outperform the generic approach. In the call-center case, performance improvements could be seen by referencing a very small amount of development data. In the SMS case, the presence of a domain-specific dictionary allowed for performance improvements without the need for any development data at all. It is likely, though not established, that employing further development data would result in further performance improvements. We leave further investigation to future work. 
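As an illustration of how lightweight such domain-specific generators can be, the sketch below (our own, hypothetical code following the generator interface of Section 3.1) implements the Twitter-specific behaviors described in Section 5.2.1 and a dictionary-backed SMS generator; the dictionary entries shown are placeholders, not the actual resource used in the experiments.

```python
from typing import Dict, List, Sequence, Tuple

Replacement = Tuple[int, int, Tuple[str, ...]]   # (i, j, s), 1-based positions

def twitter_generator(x: Sequence[str]) -> List[Replacement]:
    """Twitter-specific generator: strip or drop hashtags, @-mentions, retweets."""
    out = []
    for pos, tok in enumerate(x, start=1):
        if tok.startswith(("#", "@")) and len(tok) > 1:
            out.append((pos, pos + 1, (tok[1:],)))   # "@Hertz" -> "Hertz"
            out.append((pos, pos + 1, ()))           # or delete the token entirely
        elif tok == "RT":
            out.append((pos, pos + 1, ()))
    return out

def sms_dictionary_generator(x: Sequence[str],
                             abbrev: Dict[str, str]) -> List[Replacement]:
    """SMS-specific generator: expand shorthand via an abbreviation dictionary."""
    out = []
    for pos, tok in enumerate(x, start=1):
        if tok.lower() in abbrev:
            out.append((pos, pos + 1, tuple(abbrev[tok.lower()].split())))
    return out

# Placeholder dictionary entries for illustration only.
abbrev = {"gr8": "great", "cu": "see you", "2moro": "tomorrow"}
print(sms_dictionary_generator("cu 2moro".split(), abbrev))
# [(1, 2, ('see', 'you')), (2, 3, ('tomorrow',))]
```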
The results in Section 5.2 establish a point that has often been assumed but, to the best of our knowledge, has never been explicitly shown: performing normalization is indeed beneficial to dependency parsing on informal text. The parse of the normalized text was substantially better than the parse of the original raw text in all domains, with absolute performance increases ranging from about 18-25% on subjects and objects. Furthermore, the results suggest that, as hypothesized, preparing an informal text for a parsing task requires more than simple word-to-word normalization. The proposed approach significantly outperforms the state-of-the-art word-to-word normalization approach. Perhaps most interestingly, the proposed approach performs on par with, and in several cases superior to, gold standard word-toword annotations. This result gives strong evidence for the conclusion that parser-targeted normalization requires a broader understanding of the scope of the normalization task. While the work presented here gives promising results, there are still many behaviors found in informal text that prove challenging. One such example is the word reordering seen in Figure 4. Although word reordering could be incorporated into the model as a combination of a deletion and an insertion, the model as currently devised cannot easily link these two replacements to one another. Additionally, instances of reordering proved hard to detect in practice. As such, no reordering-based replacement generators were implemented in the presented system. Another case that proved difficult was the insertion of missing tokens. For instance, the informal sentence “Day 3 still don’t freaking 1166 feel good!:(” could be formally rendered as “It is day 3 and I still do not feel good!”. Attempts to address missing tokens in the model resulted in frequent false positives. Similarly, punctuation insertion proved to be challenging, often requiring a deep analysis of the sentence. For example, contrast the sentence “I’m watching a movie I don’t know its name.” which would benefit from inserted punctuation, with “I’m watching a movie I don’t know.”, which would not. We feel that the work presented here provides a foundation for future work to more closely examine these challenges. 7 Conclusions This work presents a framework for normalization with an eye towards domain adaptation. The proposed framework builds a statistical model over a series of replacement generators. By doing so, it allows a designer to quickly adapt a generic model to a new domain with the inclusion of a small set of domain-specific generators. Tests over three different domains suggest that, using this model, only a small amount of domain-specific data is necessary to tailor an approach towards a new domain. Additionally, this work introduces a parsercentric view of normalization, in which the performance of the normalizer is directly tied to the performance of a downstream dependency parser. This evaluation metric allows for a deeper understanding of how certain normalization actions impact the output of the parser. Using this metric, this work established that, when dependency parsing is the goal, typical word-to-word normalization approaches are insufficient. By taking a broader look at the normalization task, the approach presented here is able to outperform not only state-of-the-art word-to-word normalization approaches but also manual word-to-word annotations. 
Although the work presented here established that more than word-to-word normalization was necessary to produce parser-ready normalizations, it remains unclear which specific normalization tasks are most critical to parser performance. We leave this interesting area of examination to future work. Acknowledgments We thank the anonymous reviewers of ACL for helpful comments and suggestions. We also thank Ioana R. Stanoi for her comments on a preliminary version of this work, Daniel S. Weld for his support, and Alan Ritter, Monojit Choudhury, Bo Han, and Fei Liu for sharing their tools and data. The first author is partially supported by the DARPA Machine Reading Program under AFRL prime contract numbers FA8750-09-C-0181 and FA8750-09-C-0179. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, AFRL, or the US government. This work is a part of IBM’s SystemT project (Chiticariu et al., 2010). References AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for sms text normalization. In ACL, pages 33–40. Zhuowei Bao, Benny Kimelfeld, and Yunyao Li. 2011. A graph approach to spelling correction in domaincentric search. In ACL, pages 905–914. Richard Beaufort, Sophie Roekhaut, Louise-Am´elie Cougnon, and C´edrick Fairon. 2010. A hybrid rule/model-based finite-state framework for normalizing sms messages. In ACL, pages 770–779. Laura Chiticariu, Rajasekar Krishnamurthy, Yunyao Li, Sriram Raghavan, Frederick Reiss, and Shivakumar Vaithyanathan. 2010. SystemT: An algebraic approach to declarative information extraction. In ACL, pages 128–137. Monojit Choudhury, Rahul Saraf, Vijit Jain, Animesh Mukherjee, Sudeshna Sarkar, and Anupam Basu. 2007. Investigation and modeling of the structure of texting language. IJDAR, 10(3-4):157–174. Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP, pages 1–8. Paul Cook and Suzanne Stevenson. 2009. An unsupervised model for text message normalization. In CALC, pages 71–78. Mark Davies. 2008-. The corpus of contemporary american english: 450 million words, 1990present. Avialable online at: http://corpus. byu.edu/coca/. Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In ACL, pages 368–378. 1167 Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically constructing a normalisation dictionary for microblogs. In EMNLP-CoNLL, pages 421–432. Catherine Kobus, Franc¸ois Yvon, and G´eraldine Damnati. 2008. Normalizing SMS: are two metaphors better than one? In COLING, pages 441– 448. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289. Fei Liu, Fuliang Weng, Bingqing Wang, and Yang Liu. 2011. Insertion, deletion, or substitution? normalizing text messages without pre-categorization nor supervision. In ACL, pages 71–76. Fei Liu, Fuliang Weng, and Xiao Jiang. 2012. A broad-coverage normalization system for social media language. In ACL, pages 1035–1044. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC-06, pages 449–454. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. 
In ACL, pages 311– 318. Deana Pennell and Yang Liu. 2010. Normalization of text messages for text-to-speech. In ICASSP, pages 4842–4845. Deana Pennell and Yang Liu. 2011. A character-level machine translation approach for normalization of SMS abbreviations. In IJCNLP, pages 974–982. Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in Tweets: An experimental study. In EMNLP, pages 1524–1534. Richard Sproat, Alan W. Black, Stanley F. Chen, Shankar Kumar, Mari Ostendorf, and Christopher Richards. 2001. Normalization of non-standard words. Computer Speech & Language, 15(3):287– 333. Zhenzhen Xue, Dawei Yin, and Brian D. Davison. 2011. Normalizing microtext. In Analyzing Microtext, volume WS-11-05 of AAAI Workshops. 1168
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1169–1179, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Random Walk Approach to Selectional Preferences Based on Preference Ranking and Propagation∗ Zhenhua Tian†, Hengheng Xiang, Ziqi Liu, Qinghua Zheng‡ Ministry of Education Key Lab for Intelligent Networks and Network Security Department of Computer Science and Technology Xi’an Jiaotong University Xi’an, Shaanxi 710049, China {zhhtian†,qhzheng‡}@mail.xjtu.edu.cn Abstract This paper presents an unsupervised random walk approach to alleviate data sparsity for selectional preferences. Based on the measure of preferences between predicates and arguments, the model aggregates all the transitions from a given predicate to its nearby predicates, and propagates their argument preferences as the given predicate’s smoothed preferences. Experimental results show that this approach outperforms several state-of-the-art methods on the pseudo-disambiguation task, and it better correlates with human plausibility judgements. 1 Introduction Selectional preferences (SP) or selectional restrictions capture the plausibility of predicates and their arguments for a given relation. Kaze and Fodor (1963) describe that predicates and their arguments have strict boolean restrictions, either satisfied or violated. Sentences are semantically anomalous and not consistent in reading if they violated the restrictions. Wilks (1973) argues that “rejecting utterances is just what humans do not. They try to understand them.” He further states selectional restrictions as preferences between the predicates and arguments, where the violation can be less preferred, but not fatal. For instance, given the predicate word eat, word food is likely to be its object, iPhone is likely to be implausible for it, and tiger is less preferred but not curious. SP have been proven to help many natural language processing tasks that involve attachment de∗Partial of this work was done when the first author visiting at Language Technologies Institute of Carnegie Mellon University sponsored by the China Scholarship Council. cisions, such as semantic role labeling (Resnik, 1993; Gildea and Jurafsky, 2002), word sense disambiguation (Resnik, 1997), human plausibility judgements (Spasi´c and Ananiadou, 2004), syntactic disambiguation (Toutanova et al., 2005), word compositionality (McCarthy et al., 2007), textual entailment (Pantel et al., 2007) and pronoun resolution (Bergsma et al., 2008) etc. A direct approach to acquire SP is to extract triples (q, r, a) of predicates, relations, and arguments from a syntactically analyzed corpus, and then conduct maximum likelihood estimation (MLE) on the data. However, this strategy is infeasible for many plausible triples due to data sparsity. For example, given the relation <verb-dobjnoun> in a corpus, we may see plausible triples: eat - {food, cake, apple, banana, candy...} But we may not see plausible and implausible triples such as: eat - {watermelon, ziti, escarole, iPhone...} Then how to use a smooth model to alleviate data sparsity for SP? Random walk models have been successfully applied to alleviate the data sparsity issue on collaborative filtering in recommender systems. 
Many online businesses, such as Netflix, Amazon.com, and Facebook, have used recommender systems to provide personalized suggestions on the movies, books, or friends that the users may prefer and interested in (Liben-Nowell and Kleinberg, 2007; Yildirim and Krishnamoorthy, 2008). In this paper, we present an extension of using the random walk model to alleviate data sparsity for SP. The main intuition is to aggregate all the transitions from a given predicate to its nearby predicates, and propagate their preferences on arguments as the given predicate’s smoothed argu1169 ment preferences. Our work and contributions are summarized as follows: • We present a framework of random walk approach to SP. It contains four components with flexible configurations. Each component is corresponding to a specific functional operation on the bipartite and monopartite graphs which representing the SP data; • We propose an adjusted preference ranking method to measure SP based on the popularity and association of predicate-argument pairs. It better correlates with human plausibility judgements. It also helps to discover similar predicates more precisely; • We introduce a probability function for random walk based on the predicate distances. It controls the influence of nearby and distant predicates to achieve more accurate results; • We find out that propagate the measured preferences of predicate-argument pairs is more proper and natural for SP smooth. It helps to improve the final performance significantly. We conduct experiments using two sections of the LDC English gigaword corpora as the generalization data. For the pseudo-disambiguation task, we evaluate it on the Penn TreeBank-3 data. Results show that our model outperforms several previous methods. We further investigate the correlations of smoothed scores with human plausibility judgements. Again our method achieves better correlations on two third party data. The remainder of the paper is organized as follows: Section 2 introduces related work. Section 3 briefly formulates the overall framework of our method. Section 4 describes the detailed model configurations, with discussions on their roles and implications. Section 5 provides experiments on both the pseudo-disambiguation task and human plausibility judgements. Finally, Section 6 summarizes the conclusions and future work. 2 Related Work 2.1 WordNet-based Approach Resnik (1996) conducts the pioneer work on corpus-driven SP induction. For a given predicate q, the system firstly computes its distribution of argument semantic classes based on WordNet. Then for a given argument a, the system collects the set of candidate semantic classes which contain the argument a, and ensures they are seen in q. Finally the system picks a semantic class from the candidates with the maximal selectional association score, and defines the score as smoothed score of (q, a). Many researchers have followed the so-called WordNet-based approach to SP. One of the key issues is to induce the set of argument semantic classes that are acceptable by the given predicate. Li and Abe (1998) propose a tree cut model based on minimal description length (MDL) principle for the induction of semantic classes. Clark and Weir (2002) suggest a hypothesis testing method by ascending the noun hierarchy of WordNet. Ciaramita and Johnson (2000) model WordNet as a Bayesian network to solve the “explain away” ambiguity. 
Beyond induction on argument classes only, Agirre and Martinez (2001) propose a class-toclass model that simultaneously learns SP on both the predicate and argument classes. WordNet-based approach produces human interpretable output, but suffers the poor lexical coverage problem. Gildea and Jurafsky (2002) show that clustering-based approach has better coverage than WordNet-based approach. Brockmann and Lapata (2003) find out that sophisticated WordNet-based methods do not always outperform simple frequency-based methods. 2.2 Distributional Models without WordNet Alternatively, Rooth et al. (1999) propose an EMbased clustering smooth for SP. The key idea is to use the latent clusterings to take the place of WordNet semantic classes. Where the latent clusterings are automatically derived from distributional data based on EM algorithm. Recently, more sophisticated methods are innovated for SP based on topic models, where the latent variables (topics) take the place of semantic classes and distributional clusterings (S´eaghdha, 2010; Ritter et al., 2010). Without introducing semantic classes and latent variables, Keller and Lapata (2003) use the web to obtain frequencies for unseen bigrams smooth. Pantel et al. (2007) apply a collection of rules to filter out incorrect inferences for SP. Specifically, Dagan et al. (1999) introduce a general similaritybased model for word co-occurrence probabilities, which can be interpreted for SP. Similarly, Erk et al. propose an argument-oriented similarity model based on semantic or syntactic vector spaces (Erk, 1170 2007; Erk et al., 2010). They compare several similarity functions and weighting functions in their model. Furthermore, instead of employing various similarity functions, Bergsma et al. (2008) propose a discriminative approach to learn the weights between the predicates, based on the verb-noun co-occurrences and other kinds of features. Random walk model falls into the non-class based distributional approach. Previous literatures have fully studied the selection of distance or similarity functions to find out similar predicates and arguments (Dagan et al., 1999; Erk et al., 2010), or learn the weights between the predicates (Bergsma et al., 2008). Instead, we put effort in following issues: 1) how to measure SP; 2) how to transfer between predicates using random walk; 3) how to propagate the preferences for smooth. Experiments show these issues are important for SP and they should be addressed properly to achieve better results. 3 RSP: A Random Walk Model for SP In this section, we briefly introduce how to address SP using random walk. We propose a framework of RSP with four components (functions). Each of them are flexible to be configured. In summary, Algorithm 1 describes the overall process. Algorithm 1 RSP: Random walk model for SP Require: Init bipartite graph G with raw counts 1: // Ranking on the bipartite graph G; 2: R = Ψ(G); // ranking function 3: // Project R to monopartite graph D 4: D = Φ(R); // distance function 5: // Transform D to stochastic matrix P 6: P = ∆(D); // probability function 7: // Get the convergence eP 8: eP = ∑∞ t=1 (dP)t |(dP)t| = dP(I −dP)−1; 9: return Smoothed bipartite graph eR 10: eR = eP ∗R; // propagation function Bipartite Graph Construction: For a given relation r, the observed predicate-argument pairs can be represented by a bipartite graph G=(X, Y, E). Where X={q1, q2, ..., qm} are the m predicates, and Y ={a1, a2, ..., an} are the n arguments. 
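As a concrete illustration (our own sketch, not the authors' released code), Algorithm 1 can be written in a few lines of NumPy over the predicate-argument count matrix G; the ranking, distance, and probability functions used here anticipate the concrete choices discussed in Section 4 (the adjusted ranking, cosine distance, and a distance-based transition probability with exponent δ), and the parameter values are illustrative defaults.

```python
import numpy as np

def rank_adjusted(G, a1=0.2, a2=0.6, eps=1e-12):
    """Psi: score each seen (q, a) pair by popularity and association,
    p(q,a)^a1 * (p(q,a) / (p(q) p(a)))^a2; unseen cells stay zero."""
    p_qa = G / G.sum()
    p_q = p_qa.sum(axis=1, keepdims=True)
    p_a = p_qa.sum(axis=0, keepdims=True)
    R = (p_qa ** a1) * ((p_qa / (p_q * p_a + eps)) ** a2)
    return np.where(G > 0, R, 0.0)

def cosine_distance(R, eps=1e-12):
    """Phi: project the bipartite graph onto predicate-predicate distances."""
    unit = R / (np.linalg.norm(R, axis=1, keepdims=True) + eps)
    return 1.0 - unit @ unit.T

def transition_probs(D, delta=2.0):
    """Delta: nearby predicates get high probability, distant ones are penalized."""
    P = (1.0 - D) ** delta
    return P / P.sum(axis=1, keepdims=True)

def smooth(G, d=0.005, delta=2.0):
    R = rank_adjusted(G)                       # line 2 of Algorithm 1
    D = cosine_distance(R)                     # line 4
    P = transition_probs(D, delta)             # line 6
    I = np.eye(P.shape[0])
    P_conv = d * P @ np.linalg.inv(I - d * P)  # line 8: converged transitions
    return P_conv @ R                          # line 10: propagate preferences

# Toy example: 3 predicates x 4 arguments with raw co-occurrence counts.
G = np.array([[5., 2., 0., 0.],
              [4., 0., 1., 0.],
              [0., 0., 3., 6.]])
print(smooth(G).round(4))
```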
We initiate the links E with the raw co-occurrence counts of seen predicate-argument pairs in a given generalization data. We represent the graph by an adjacency matrix with rows representing predicates and columns as arguments. For convenience, we use indices i, j to represent predicates qi, qj, and k, l for arguments ak, al. We employ a preference ranking function Ψ to measure the SP between the predicates and arguments. It transforms G to a corresponding bipartite graph R, with links representing the strength of SP. Each row of the adjacency matrix R denotes the predicate vector ⃗qi or ⃗qj. We discuss the selection of Ψ in section 4.1. Ψ := G 7→R (1) Argument Nodes Predicate Nodes can fish food crop flower soil fruit eat cook harvest cultivate irrigate consume harvest consume cook eat cultivate irrigate chicken crop food fruit flower can chicken fish Predicate Projection Argument Projection soil Figure 1: Illustration of (R) the bipartite graph of the verb-dobj-noun relation, (Q) the predicate-projection monopartite graph, and (A) the argument-projection monopartite graph. Monopartite Graph Projection: In order to conduct random walk on the graph, we project the bipartite graph R onto a monopartite graph Q=(X, E) between the predicates, or A=(Y, E) between the arguments (Zhou et al., 2007). Figure 1 illustrates the intuition of the projection. The links in Q represent the indirect connects between the predicates in R. Two predicates are connected in Q if they share at least one common neighbor argument in R. The weight of the links in Q could be set by arbitrary distance measures. We refer D as an instance of the projection Q by a given distance function Φ. Φ := R 7→D (2) Stochastic Walking Strategy: We introduce a probability function ∆to transform the predicate distances D into transition probabilities P. Where P is a stochastic matrix, with each element pij represents the transition probability from predicate qi to qj. Generally speaking, nearby predicates gain higher probabilities to be visited, while distant predicates will be penalized. ∆:= D 7→P (3) 1171 Follow Equation 4, we aggregate over all orders of the transition probabilities P as the final stationary probabilities eP. According to the PerronFrobenius theory, one can verify that it converges to dP(I −dP)−1 when P is non-negative and regular matrix (Li et al., 2009). Where t represents the orders: the length of the path between two nodes in terms of edges. The damp factor d ∈(0, 1), and its value mainly depends on the data sparsity level. Typically d prefers small values such as 0.005. It means higher order transitions are much less reliable than lower orders (LibenNowell and Kleinberg, 2007). eP = ∞ ∑ t=1 (dP)t |(dP)t| = dP(I −dP)−1 (4) Preference Propagation: in Equation 5, we combine the converged transition probabilities eP with the measured preferences R as the propagation function: 1) for a given predicate, firstly it transfers to all nearby predicates with designed probabilities; 2) then it sums over the arguments preferred by these predicates with quantified scores to get smoothed eR. We further describe its configuration details in Section 4.4 and Equation 12 with two propagation modes. eR = eP ∗R (5) 4 Model Configurations 4.1 Preference Ranking: Measure the Selectional Preferences In collaborative filtering, usually there are explicit and scaled user ratings on their item preferences. For instance, a user ratings a movie with a score∈[0,10] on IMDB site. 
But in SP, the preferences between the predicates and arguments are implicit: their co-occurrence counts follow the power law distribution and vary greatly. Therefore, we employ a ranking function Ψ to measure the SP of the seen predicate-argument pairs. We suppose this could bring at least two benefits: 1) a proper measure on the preferences can make the discovering of nearby predicates with similar preferences to be more accurate; 2) while propagation, we propagate the scored preferences, rather than the raw counts or conditional probabilities, which could be more proper and agree with the nature of SP smooth. We denote SelPref(q, a) as Pr(q, a) for short. SelPref(q, a) = Ψ(q, a) (6) Previous literatures have well studied on various smooth models for SP. However, they vary greatly on the measure of preferences. It is still not clear how to do this best. Lapata et al. investigate the correlations between the co-occurrence counts (CT) c(q, a), or smoothed counts with the human plausibility judgements (Lapata et al., 1999; Lapata et al., 2001). Some introduce conditional probability (CP) p(a|q) for the decision of preference judgements (Chambers and Jurafsky, 2010; Erk et al., 2010; S´eaghdha, 2010). Meanwhile, the pointwise mutual information (MI) is also employed by many researchers to filter out incorrect inferences (Pantel et al., 2007; Bergsma et al., 2008). ΨCT = c(q, a) ΨMI = log p(q, a) p(q)p(a) ΨCP = c(q, a) c(q, ∗) ΨT D = c(q, a)log( m |a|) (7) In this paper, we present an adjusted ranking function (AR) in Equation 8 to measure the SP of seen predicate-argument pairs. Intuitively, it measures the preferences by combining both the popularity and association, with parameters control the uncertainty of the trade-off between the two. We define the popularity as the joint probability p(q, a) based on MLE, and the association as MI. This is potentially similar to the process of human plausibility judgements. One may judge the plausibility of a predicate-argument collocation from two sides: 1) if it has enough evidences and commonly to be seen; 2) if it has strong association according to the cognition based on kinds of background knowledge. This metric is also similar to the TF-IDF (TD) used in information retrieval. ΨAR(q, a) = p(q, a)α1 ( p(q, a) p(q)p(a) )α2 s.t. α1, α2 ∈[0, 1] (8) We verify if a metric is better by two tasks: 1) how well it correlates with human plausibility judgements; 2) how well it helps with the smooth inference to disambiguate plausible and implausible instances. We conduct empirical experiments on these issues in Section 5.3 and Section 5.4. 4.2 Distance Function: Projection of the Monopartite Graph In Equation 9, the distance function Φ is used to discover nearby predicates with distance dij. It weights the links on the monopartite graph Q. It 1172 guides the walker to transfer between predicates. We calculate Φ based on the vectors ⃗qi, ⃗qj represented by the measured preferences in R. dij = Φ(⃗qi, ⃗qj) (9) Where Φ can be distance functions such as Euclidean (norm) distance or Kullback-Leibler divergence (KL) etc., or one minus the similarity functions such as Jaccard and Cosine etc. The selection of distributional functions has been fully studied by previous work (Lee, 1999; Erk et al., 2010). In this paper, we do not focus on this issue due to page limits. We simply use the Cosine function: Φcosine(⃗qi, ⃗qj) = 1 − ⃗qi · ⃗qj ∥⃗qi∥∥⃗qj∥ (10) 4.3 Probability Function: the Walk Strategy We define the probability function ∆as Equation 11. 
Where the transition probability p(qj|qi) in P is defined as a function of the distance dij with a parameter δ. Intuitively, it means in a given walk step, a predicate qj which is far away from qi will get much less probability to be visited, and qi has high probabilities to start walk from itself and its nearby predicates to pursue good precision. Once we get the transition matrix P, we can compute eP according to Equation 4. p(qj|qi) = ∆(dij) = (1 −dij)δ Z(qi) s.t. δ ≥0, dij ∈[0, 1] (11) Where the parameter δ is used to control the balance of nearby and distant predicates. Z(qi) is the normalize factor. Typically, δ around 2 can produce good enough results in most cases. We verify the settings of δ in section 5.3.2. 4.4 Propagation Function The propagation function in Equation 5 is represented by the matrix form. It can be expanded and rewritten as Equation 12. Where ep(qj|qi) is the converged transition probability from predicate qi to qj. Pr(ak, qj) is the measured preference of predicate qj with argument ak. f Pr(ak, qi) = m ∑ j=1 ep(qj|qi) · Pr(ak, qj) (12) We employ two propagation modes (PropMode) for the preference propagation function. One is ’CP’ mode. In this mode, we always set Pr(q, a) as the conditional probability p(a|q) for the propagation function, despite what Ψ is used for the distance function. This mode is similar to previous methods (Dagan et al., 1999; Keller and Lapata, 2003; Bergsma et al., 2008). The other is ’PP’ mode. We set ranking function Ψ=Pr(q, a) always to be the same in both the distance function and the propagation function. That means what we propagated is the designed and scored preferences. This could be more proper and agree with the nature of SP smooth. We show the improvement of this extension in section 5.3.1. 5 Experiments 5.1 Data Set Generalization Data: We parsed the Agence France-Presse (AFP) and New York Times (NYT) sections of the LDC English Gigaword corpora (Parker et al., 2011), each from year 2001-2010. The parser is provided by the Stanford CoreNLP package1. We filter out all tokens containing non-alphabetic characters, collect the <verb-dobjnoun > triples from the syntactically analyzed data. Predicates (verbs) whose frequency lower than 30 and arguments (noun headwords) whose frequency less than 5 are excluded out. No other filters have been done. The resulting data consist of: • AFP: 26, 118, 892 verb-dobj-noun observations with 1, 918, 275 distinct triples, totally 4, 771 predicates and 44, 777 arguments. • NYT: 29, 149, 574 verb-dobj-noun observations with 3, 281, 391 distinct triples, totally 5, 782 predicates and 57, 480 arguments. Test Data: For pseudo-disambiguation, we employ Penn TreeBank-3 (PTB) as the test data (Marcus et al., 1999)2. We collect the 36, 400 manually annotated verb-dobj-noun dependencies (with 23, 553 distinct ones) from PTB. We keep dependencies whose predicates and arguments are seen in the generalization data. We randomly select 20% of these dependencies as the test set. We split the test set equally into two parts: one as the development set and the other as the final test set. Human Plausibility Judgements Data: We employ two human plausibility judgements data 1http://nlp.stanford.edu/software/corenlp.shtml 2PTB includes 2, 499 stories from the Wall Street Journal (WSJ). It is different with our two generalization data. 1173 for the correlation evaluation. 
In each they collect a set of predicate-argument pairs, and annotate with two kinds of human ratings: one for an argument takes the role as the patient of a predicate, and the other for the argument as the agent. The rating values are between 1 and 7: e.g. they assign hunter-subj-shoot with a rating 6.9 but 2.8 for shoot-dobj-hunter. • PBP: Pad´o et al. (2007) develop a set of human plausibility ratings on the basis of the Penn TreeBank and FrameNet respectively. We refer PBP as their 212 patient ratings from the Penn TreeBank. • MRP: This data are originally contributed by McRae et al. (1998). We use all their 723 patient-nn ratings. Without explicit explanation, we remove all the selected PTB tests and human plausibility pairs from AFP and NYT to treat them unseen. 5.2 Comparison Methods Since RSP falls into the unsupervised distributional approach, we compare it with previous similarity-based methods and unsupervised generative topic model 3. Erk et al. (Erk, 2007; Erk et al., 2010) are the pioneers to address SP using similarity-based method. For a given (q, a) in relation r, the model sums over the similarities between a and the seen headwords a′ ∈Seen(q, r). They investigated several similarity functions sim(a, a′) such as Jaccard, Cosine, Lin, and nGCM etc., and different weighting functions wtq,r(a′). S(q, r, a) = ∑ a′ wtq,r(a′) Zq,r · sim(a, a′) (13) For comparison, we suppose the primary corpus and generalization corpus in their model to be the same. We set the similarity function of their model as nGCM, use both the FREQ and DISCR weighting functions. The vector space is in SYNPRIMARY setting with 2, 000 basis elements. Dagan et al. (1999) propose state-of-the-art similarity based model for word co-occurrence probabilities. Though it is not intended for SP, but it can be interpreted and rewritten for SP as: Pr(a|q) = ∑ q′∈Simset(q) sim(q, q′) Z(q) p(a|q′) (14) 3The implementation of RSP and listed previous methods are available at https://github.com/ZhenhuaTian/RSP They use the k-closest nearbys as Simset(q), with a parameter β to revise the similarity function. For comparison, we use the Jensen-Shannon divergence (Lin, 1991) which shows the best performance in their work as sim(q, q′), and optimize the settings of k and β in our experiments. LDA-SP: Another kind of sophisticated unsupervised approaches for SP are latent variable models based on Latent Dirichlet Allocation (LDA). ´O S´eaghdha (2010) applies topic models for the SP induction with three variations: LDA, Rooth-LDA, and Dual-LDA; Ritter et al. (2010) focus on inferring latent topics and their distributions over multiple arguments and relations (e.g., the subject and direct object of a verb). In this work, we compare with ´O S´eaghdha’s original LDA approach to SP. We use the Matlab Topic Modeling Toolbox4 for the inference of latent topics. The hyper parameters are set as suggested α=50/T and β=200/n, where T is the number of topics and n is the number of arguments. We test T=100, 200, 300, each with 1, 000 iterations of Gibbs sampling. 5.3 Pseudo-Disambiguation Pseudo-disambiguation has been used for SP evaluation by many researchers (Rooth et al., 1999; Erk, 2007; Bergsma et al., 2008; Chambers and Jurafsky, 2010; Ritter et al., 2010). First the system removes a portion of seen predicate-argument pairs from the generalization data to treat them as unseen positive tests (q, a+). Then it introduces confounder selection to create a pseudo negative test (q, a−) for each positive (q, a+). 
Finally it evaluates a SP model by how well the model disambiguates these positive and negative tests. Confounder Selection: for a given (q, a+), the system selects an argument a′ from the argument vocabulary. Then by ensure (q, a′) is unseen in the generalization data, it treats a′ as pseudo a−. This process guarantees that (q, a−) to be negative in real case with very high probability. Previous work have made advances on confounder selection with random, bucket and nearest confounders. Random confounder (RND) most closes to the realistic case; While nearest confounder (NER) is reproducible and it avoids frequency bias (Chambers and Jurafsky, 2010). In this work, we employ both RND and NER confounders: 1) for RND, we randomly select 4psiexp.ss.uci.edu/research/programs data/toolbox.htm 1174 confounders according to the occurrence probability of arguments. We sample confounders on both the development and final test data with 100 iterations. 2) for NER, firstly we sort the arguments by their frequency. Then we select the nearest confounders with two iterations. One iteration selects the confounder whose frequency is more than or equal to a+, and the other iteration with frequency lower than or equal to a+. Evaluation Metric: we evaluate performance on both the pairwise and pointwise settings: 1) On pairwise setting, we combine corresponding (q, a+, a−) together as test instances. The performance is evaluated based on the accuracy (ACC) metric. It computes the portion of test instances (q, a+, a−) which correctly predicted by the smooth model with score(q, a+) > score(q, a−). We weight each instance equally for macroACC, and weight each by the frequency of the positive pair (q, a+) for microACC. 2) On pointwise setting, we use each positive test (q, a+) or negative test (q, a−) as test instances independently. We treat it as a binary classification task, and evaluate using the standard area-under-the-curve (AUC) metric. This metric is firstly employed for the SP evaluation by Ritter et al (2010). For macroAUC, we weight each instance equally; for microAUC, we weight each by its argument frequency (Bergsma et al., 2008). Parameters Tuning: The parameters are tuned on the PTB development set, using AFP as the generalization data. We report the overall performance on the final test set. While using NYT as the generalization data, we hold the same parameter settings as AFP to ensure the results are robust. Note that indeed the parameter settings would vary among different generalization and test data. 5.3.1 Verify Ranking Function and Propagation Method This experiment is conducted on the PTB development set with RND confounders. We use AFP and NYT as the generalization data. For comparison, we set the distance function Φ as Cosine, with default d=0.005, and δ=1. In Table 1, the evaluation metric is Accuracy. The first 4 rows are the results of ’CP’ PropMode, and the latter 3 rows are the ’PP’ PropMode. With respect to the ranking function Ψ, CP performs the worst as it considers only the popularity rather than association. The heavy bias on frequent predicates and arguments has two major drawbacks: a) The computation of predicate distances would rely much more on frequent arguments, rather than those arguments they preferred; b) While propagation, it may bias more on frequent arguments, too. Even these frequent arguments are less preferred and not proper to be propagated. Crit. 
AFP NYT macro micro macro micro ΨCP 71.7 76.7 78.2 81.2 ΨMI 70.9 75.8 79.1 81.8 ΨT D 73.4 78.2 80.9 83.4 ΨAR 72.9 77.8 81.0 83.5 ΨMI 76.8 80.6 81.9 83.8 ΨT D 74.4 79.1 81.8 84.2 ΨAR 82.5 85.2 87.7 88.6 Table 1: Comparing different ranking functions. For MI, it biases infrequent arguments with strong association, without regarding to the popular arguments with more evidences. Furthermore, the generalization data is automatically parsed and kind of noisy, especially on infrequent predicates and arguments. The noises could yield unreliable estimations and decrease the performance. For TD, it outperforms MI method on ’CP’ PropMode, but it not always outperforms MI on ’PP’ PropMode. It is no surprise to find out the adjusted ranking AR achieves better results on both AFP and NYT data, with α1=0.2 and α2=0.6. Finally, it shows the ’PP’ mode, which propagating the designed preference scores, gains significantly better performance as discussed in Section 4.4. 5.3.2 Verify δ of the Probability Function This experiment is conducted on the PTB development tests with both RND and NER confounders. The generalization data is AFP. 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 76 78 80 82 84 86 88 90 accuracy (%) delta RND macro accuracy RND micro accuracy NER macro accuracy NER micro accuracy Figure 2: Performance variation on different δ. 1175 Criterion AFP NYT RND NER RND NER macro micro macro micro macro micro macro micro Erk et al. FREQ 73.7 73.6 73.9 73.6 68.3 68.4 63.8 63.0 Erk et al.DISCR 76.0 78.3 79.1 78.1 83.3 84.2 82.4 82.6 Dagan et al. 80.6 82.8 84.7 85.0 87.0 87.6 86.9 87.3 LDA-SP 82.0 83.5 83.7 82.9 89.1 89.0 87.9 87.8 RSPnaive 72.6 76.4 79.4 81.1 78.5 80.4 74.8 78.0 +Rank 74.0 77.7 83.5 85.2 81.4 83.1 84.5 86.9 +Rank+PP 83.5 85.2 87.2 87.0 88.2 88.2 88.0 88.3 +Rank+PP+Delta 86.2 87.3 88.4 88.1 90.6 90.1 91.1 89.3 Table 2: Pseudo-disambiguation results of different smooth models. Macro and micro Accuracy. 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 False Positive (FP) True Positive (TP) Erk et al. macroAUC=0.72 Dagan et al. macroAUC=0.80 LDA−SP macroAUC=0.77 RSP−ALL macroAUC=0.84 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 False Positive (FP) True Positive (TP) Erk et al. microAUC=0.62 Dagan et al. microAUC=0.83 LDA−SP microAUC=0.73 RSP−ALL microAUC=0.89 Figure 3: Marco and micro ROC curves of different smooth models. We set the ranking function Ψ as AR (with tuned α1=0.2 and α2=0.6), the distance function Φ as Cosine, default d=0.005, and we restrict δ ∈ [0.5, 4]. Figure 2 shows δ has significant impact on the performance. Starting from δ=0.5, the system gains better performance while δ increasing. It achieves good results around δ=2. This means for a given predicate, the penalty on its distant predicates helps to get more accurate smooth. The performance will drop if δ becomes too big. This means closest predicates are useful for smooth. It it not better to penalize them heavily. 5.3.3 Overall Performance Finally we compare the overall performance of different models. We report the results on the PTB final test set, with RND and NER confounders. Table 2 shows the overall performance on Accuracy metric. Among previous methods in the first 4 rows, LDA-SP performs the best in most cases. In the last 4 rows, RSPnaive means both the ranking function and PropMode are set as ’CP’ and δ=1. This configuration yields poor performance. 
Iteratively, by employing the adjusted ranking function, smoothing with preference propagation method, and revising the probability function with the parameter δ, RSP outperforms all previous methods. The parameter settings of RSPAll are α1=0.2, α2=0.6, δ=1.75 and d=0.005. Figure 3 show the macro (left) and micro (right) receiver-operating-characteristic (ROC) curves of different models, using AFP as the generalization data and RND confounders. For each kind of previous methods, we show the best AUC they achieved. RASP-All still performs the best on the terms of AUC metric, achieving macroAUC at 84% and microAUC at 89%. We also verified the AUC metric using NYT as the generalization data. The results are similar to the AFP data. It is also interesting to find out that the ACC metric is not always bring into correspondence with the AUC metric. The difference mainly raise on the pointwise and pairwise test settings of pseudodisambiguation. 5.4 Human Plausibility Judgements We conduct empirical studies on the correlations between different preference ranking func1176 Criterion AFP NYT Spearman’s ρ Kendall’s τ Spearman’s ρ Kendall’s τ PBP MRP PBP MRP PBP MRP PBP MRP CT 0.49 0.36 0.37 0.28 0.54 0.44 0.41 0.34 CP 0.47 0.39 0.35 0.30 0.51 0.48 0.39 0.37 MI 0.56 0.39 0.43 0.31 0.54 0.49 0.41 0.38 TD 0.53 0.36 0.39 0.28 0.56 0.45 0.42 0.34 AR 0.58 0.40 0.44 0.31 0.58 0.50 0.44 0.39 Erk et al. FREQ 0.30 0.08 0.22 0.06 0.25 0.09 0.18 0.06 Erk et al.DISCR 0.06 0.21 0.04 0.15 0.16 0.23 0.11 0.16 Dagan et al. 0.32 0.24 0.24 0.18 0.46 0.29 0.34 0.21 LDA-SP 0.31 0.32 0.23 0.23 0.38 0.38 0.28 0.28 LDA-SP+Bayes 0.39 0.25 0.30 0.18 0.40 0.32 0.30 0.23 RSP-All 0.46 0.31 0.34 0.23 0.53 0.38 0.40 0.28 Table 3: Correlation results on the human plausibility judgements data. tions and human ratings. Follow Lapata et al. (2001), we first collect the co-occurrence counts of predicate-argument pairs in the human plausibility data from AFP and NYT (before removing them as unseen pairs). Then we score them with different ranking functions (described in Section 4.1) based on MLE. Inspired by Erk et al. (2010), we do not suppose linear correlations between the estimated scores and human ratings. We use the Spearman’s ρ and Kendal’s τ rank correlation coefficient. We also compare the correlations between the smoothed scores of different models with human ratings. With respect to upper bounds, Pad´o et al. (2007) suggest that the typical agreement of human participants is around a correlation of 0.7 on their plausibility data. We hold that automatic models of plausibility can not be expected to surpass this upper bound. In Table 3, all coefficients are verified at significant level p<0.01. The first 5 rows are the correlations between the preference ranking functions and human ratings based on MLE. On both the PBP and MRP data, the proposed AR metric better correlates with human ratings than others, with α2 >0.5 and α1 around [0.2, 0.35]. The latter 6 rows are the results of smooth models. It shows LDASP performs good correlation with human ratings, where LDA-SP+Bayes refers to the Bayes prediction method of Ritter et al. (2010). RSP model gains the best correlation on the two plausibility data in most cases, where the parameter settings are the same as pseudo-disambiguation. 6 Conclusions and Future Work In this work we present an random walk approach to SP. Experiments show it is efficient and effective to address data sparsity for SP. It is also flexible to be applied to new data. 
We find out that a proper measure on SP between the predicates and arguments is important for SP. It helps with the discovering of nearby predicates and it makes the preference propagation to be more accurate. Another issue is that it is not good enough to directly applies the similarity or distance functions for smooth. Potential future work including but not limited to follows: investigate argument-oriented and personalized random walk, extend the model in heterogenous network with multiple link types, discover soft clusters using random walk for semantic induction, and combine it with discriminative learning approach etc. Acknowledgments The research is supported in part by the National High Technology Research and Development Program 863 of China under Grant No.2012AA011003; Key Projects in the National Science and Technology Pillar Program under Grant No.2011BAK08B02; Chinese Government Graduate Student Overseas Study Program sponsored by the China Scholarship Council (CSC). We also gratefully acknowledge the anonymous reviewers for their helpful comments. 1177 References Eneko Agirre and David Martinez. 2001. Learning class-to-class selectional preferences. In Proceedings of the 2001 workshop on Computational Natural Language Learning. Shane Bergsma, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. In EMNLP. Carsten Brockmann and Mirella Lapata. 2003. Evaluating and combining approaches to selectional preference acquisition. In EACL. Nathanael Chambers and Dan Jurafsky. 2010. Improving the use of pseudo-words for evaluating selectional preferences. In ACL. Massimiliano Ciaramita and Mark Johnson. 2000. Explaining away ambiguity: Learning verb selectional preference with bayesian networks. In COLING. Stephen Clark and David J. Weir. 2002. Class-based probability estimation using a semantic hierarchy. Computational Linguistics, 28(2):187–206. Ido Dagan, Lillian Lee, and Fernando C. N. Pereira. 1999. Similarity-Based Models of Word Cooccurrence Probabilities. Machine Learning, 34:43–69. Katrin Erk, Sebastian Pad´o, and Ulrike Pad´o. 2010. A flexible, corpus-driven model of regular and inverse selectional preferences. Computational Linguistics, 36(4):723–763. Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In ACL. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Jerrold J. Katz and Jerry A. Fodor. 1963. The structure of a semantic theory. Language, 39(2):170–210. Frank Keller and Mirella Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3):459–484. Maria Lapata, Scott McDonald, and Frank Keller. 1999. Determinants of adjective-noun plausibility. In EACL, pages 30–36. Association for Computational Linguistics. Maria Lapata, Frank Keller, and Scott McDonald. 2001. Evaluating smoothing algorithms against plausibility judgements. In ACL, pages 354–361. Association for Computational Linguistics. Lillian Lee. 1999. Measures of distributional similarity. In ACL, pages 25–32, Stroudsburg, PA, USA. Association for Computational Linguistics. Hang Li and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the mdl principle. Computational linguistics, 24(2):217–244. Ming Li, Benjamin M Dias, Ian Jarman, Wael ElDeredy, and Paulo JG Lisboa. 2009. Grocery shopping recommendations based on basket-sensitive random walk. In SIGKDD, pages 1215–1224. ACM. 
David Liben-Nowell and Jon Kleinberg. 2007. The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7):1019–1031. Jianhua Lin. 1991. Divergence measures based on the shannon entropy. IEEE Transactions on Information Theory, 37(1):145–151. Mitchell P. Marcus, Beatrice Santorini, Mary Ann Marcinkiewicz, and Ann Taylor. 1999. Treebank3. Diana McCarthy, Sriram Venkatapathy, and Aravind K. Joshi. 2007. Detecting compositionality of verbobject combinations using selectional preferences. In EMNLP-CoNLL. Ken McRae, Michael J. Spivey-Knowltonb, and Michael K. Tanenhausc. 1998. Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension. Journal of Memory and Language, 38(3):283–312. Sebastian Pad´o, Ulrike Pad´o, and Katrin Erk. 2007. Flexible, corpus-based modelling of human plausibility judgements. In EMNLP/CoNLL, volume 7. Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard Hovy. 2007. Isp: Learning inferential selectional preferences. In NAACL-HLT. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword fifth edition. Philip Resnik. 1993. Selection and information: a class-based approach to lexical relationships. IRCS Technical Reports Series. Philip Resnik. 1996. Selectional constraints: An information-theoretic model and its computational realization. Cognition, 61(1):127–159. Philip Resnik. 1997. Selectional preference and sense disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How. Washington, DC. Alan Ritter, Mausam, and Oren Etzioni. 2010. A latent dirichlet allocation method for selectional preferences. In ACL. Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via em-based clustering. In ACL. Diarmuid ´O S´eaghdha. 2010. Latent variable models of selectional preference. In ACL. 1178 Irena Spasi´c and Sophia Ananiadou. 2004. Using automatically learnt verb selectional preferences for classification of biomedical terms. Journal of Biomedical Informatics, 37(6):483–497. Kristina Toutanova, Christopher D. Manning, Dan Flickinger, and Stephan Oepen. 2005. Stochastic hpsg parse disambiguation using the redwoods corpus. Research on Language & Computation, 3(1):83–105. Yorick Wilks. 1973. Preference semantics. Technical report, DTIC Document. Hilmi Yildirim and Mukkai S. Krishnamoorthy. 2008. A random walk method for alleviating the sparsity problem in collaborative filtering. In Proceedings of the 2008 ACM conference on Recommender systems, pages 131–138. ACM. Tao Zhou, Jie Renan, Mat´uˇs Medo, and Yi-Cheng Zhang. 2007. Bipartite network projection and personal recommendation. Physical Review E, 76(4):046115. 1179
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1180–1189, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics ImpAr: A Deterministic Algorithm for Implicit Semantic Role Labelling Egoitz Laparra IXA Group University of the Basque Country San Sebastian, Spain [email protected] German Rigau IXA Group University of the Basque Country San Sebastian, Spain [email protected] Abstract This paper presents a novel deterministic algorithm for implicit Semantic Role Labeling. The system exploits a very simple but relevant discursive property, the argument coherence over different instances of a predicate. The algorithm solves the implicit arguments sequentially, exploiting not only explicit but also the implicit arguments previously solved. In addition, we empirically demonstrate that the algorithm obtains very competitive and robust performances with respect to supervised approaches that require large amounts of costly training data. 1 Introduction Traditionally, Semantic Role Labeling (SRL) systems have focused in searching the fillers of those explicit roles appearing within sentence boundaries (Gildea and Jurafsky, 2000, 2002; Carreras and M`arquez, 2005; Surdeanu et al., 2008; Hajiˇc et al., 2009). These systems limited their searchspace to the elements that share a syntactical relation with the predicate. However, when the participants of a predicate are implicit this approach obtains incomplete predicative structures with null arguments. The following example includes the gold-standard annotations for a traditional SRL process: (1) [arg0 The network] had been expected to have [np losses] [arg1 of as much as $20 million] [arg3 on baseball this year]. It isn’t clear how much those [np losses] may widen because of the short Series. The previous analysis includes annotations for the nominal predicate loss based on the NomBank structure (Meyers et al., 2004). In this case the annotator identifies, in the first sentence, the arguments arg0, the entity losing something, arg1, the thing lost, and arg3, the source of that loss. However, in the second sentence there is another instance of the same predicate, loss, but in this case no argument has been associated with it. Traditional SRL systems facing this type of examples are not able to fill the arguments of a predicate because their fillers are not in the same sentence of the predicate. Moreover, these systems also let unfilled arguments occurring in the same sentence, like in the following example: (2) Quest Medical Inc said it adopted [arg1 a shareholders’ rights] [np plan] in which rights to purchase shares of common stock will be distributed as a dividend to shareholders of record as of Oct 23. For the predicate plan in the previous sentence, a traditional SRL process only returns the filler for the argument arg1, the theme of the plan. However, in both examples, a reader could easily infer the missing arguments from the surrounding context of the predicate, and determine that in (1) both instances of the predicate share the same arguments and in (2) the missing argument corresponds to the subject of the verb that dominates the predicate, Quest Medical Inc. Obviously, this additional annotations could contribute positively to its semantic analysis. In fact, Gerber and Chai (2010) pointed out that implicit arguments can increase the coverage of argument structures in NomBank by 71%. 
However, current automatic systems require large amounts of manually annotated training data for each predicate. The effort required for this manual annotation explains the absence of generally applicable tools. This problem has become a main concern for many NLP tasks. This fact explains a new trend to develop accurate unsupervised systems that exploit simple but robust linguistic principles (Raghunathan et al., 2010). In this work, we study the coherence of the predicate and argument realization in discourse. In particular, we have followed a similar approach to 1180 the one proposed by Dahl et al. (1987) who filled the arguments of anaphoric mentions of nominal predicates using previous mentions of the same predicate. We present an extension of this idea assuming that in a coherent document the different ocurrences of a predicate, including both verbal and nominal forms, tend to be mentions of the same event, and thus, they share the same argument fillers. Following this approach, we have developed a deterministic algorithm that obtains competitive results with respect to supervised methods. That is, our system can be applied to any predicate without training data. The main contributions of this work are the following: • We empirically prove that there exists a strong discourse relationship between the implicit and explicit argument fillers of the same predicates. • We propose a deterministic approach that exploits this discoursive property in order to obtain the fillers of implicit arguments. • We adapt to the implicit SRL problem a classic algorithm for pronoun resolution. • We develop a robust algorithm, ImpAr, that obtains very competitive results with respect to existing supervised systems. We release an open source prototype implementing this algorithm1. The paper is structured as follows. Section 2 discusses the related work. Section 3 presents in detail the data used in our experiments. Section 4 describes our algorithm for implicit argument resolution. Section 5 presents some experiments we have carried out to test the algorithm. Section 6 discusses the results obtained. Finally, section 7 offers some concluding remarks and presents some future research lines. 2 Related Work The first attempt for the automatic annotation of implicit semantic roles was proposed by Palmer et al. (1986). This work applied selectional restrictions together with coreference chains, in a very specific domain. In a similar approach, Whittemore et al. (1991) also attempted to solve implicit 1http://adimen.si.ehu.es/web/ImpAr arguments using some manually described semantic constraints for each thematic role they tried to cover. Another early approach was presented by Tetreault (2002). Studying another specific domain, they obtained some probabilistic relations between some roles. These early works agree that the problem is, in fact, a special case of anaphora or coreference resolution. Recently, the task has been taken up again around two different proposals. On the one hand, Ruppenhofer et al. (2010) presented a task in SemEval-2010 that included an implicit argument identification challenge based on FrameNet (Baker et al., 1998). The corpus for this task consisted in some novel chapters. They covered a wide variety of nominal and verbal predicates, each one having only a small number of instances. Only two systems were presented for this subtask obtaining quite poor results (F1 below 0,02). 
VENSES++ (Tonelli and Delmonte, 2010) applied a rule based anaphora resolution procedure and semantic similarity between candidates and thematic roles using WordNet (Fellbaum, 1998). The system was tuned in (Tonelli and Delmonte, 2011) improving slightly its performance. SEMAFOR (Chen et al., 2010) is a supervised system that extended an existing semantic role labeler to enlarge the search window to other sentences, replacing the features defined for regular arguments with two new semantic features. Although this system obtained the best performance in the task, data sparseness strongly affected the results. Besides the two systems presented to the task, some other systems have used the same dataset and evaluation metrics. Ruppenhofer et al. (2011), Laparra and Rigau (2012), Gorinski et al. (2013) and Laparra and Rigau (2013) explore alternative linguistic and semantic strategies. These works obtained significant gains over previous approaches. Silberer and Frank (2012) adapted an entity-based coreference resolution model to extend automatically the training corpus. Exploiting this additional data, their system was able to improve previous results. Following this approach Moor et al. (2013) present a corpus of predicate-specific annotations for verbs in the FrameNet paradigm that are aligned with PropBank and VerbNet. On the other hand, Gerber and Chai (2010, 2012) studied the implicit argument resolution on NomBank. They uses a set of syntactic, semantic and coreferential features to train a logistic regres1181 sion classifier. Unlike the dataset from SemEval2010 (Ruppenhofer et al., 2010), in this work the authors focused on a small set of ten predicates. But for those predicates, they annotated a large amount of instances in the documents from the Wall Street Journal that were already annotated for PropBank (Palmer et al., 2005) and NomBank. This allowed them to avoid the sparseness problems and generalize properly from the training set. The results of this system were far better than those obtained by the systems that faced the SemEval-2010 dataset. This works represent the deepest study so far of the features that characterizes the implicit arguments 2. However, many of the most important features are lexically dependent on the predicate and cannot been generalized. Thus, specific annotations are required for each new predicate to be analyzed. All the works presented in this section agree that implicit arguments must be modeled as a particular case of coreference together with features that include lexical-semantic information, to build selectional preferences. Another common point is the fact that these works try to solve each instance of the implicit arguments independently, without taking into account the previous realizations of the same implicit argument in the document. We propose that these realizations, together with the explicit ones, must maintain a certain coherence along the document and, in consequence, the filler of an argument remains the same along the following instances of that argument until a stronger evidence indicates a change. We also propose that this feature can be exploited independently from the predicate. 3 Datasets In our experiments, we have focused on the dataset developed in Gerber and Chai (2010, 2012). This dataset (hereinafter BNB which stands for ”Beyond NomBank”) extends existing predicate annotations for NomBank and ProbBank. BNB presented the first annotation work of implicit arguments based on PropBank and NomBank frames. 
This annotation was an extension of the standard training, development and testing sections of Penn TreeBank that have been typically used for SRL evaluation and were already annotated with PropBank and NomBank predicate 2Gerber and Chai (2012) includes a set of 81 different features. structures. The authors selected a limited set of predicates. These predicates are all nominalizations of other verbal predicates, without sense ambiguity, that appear frequently in the corpus and tend to have implicit arguments associated with their instances. These constraints allowed them to model enough occurrences of each implicit argument in order to cover adequately all the possible cases appearing in a test document. For each missing argument position they went over all the preceding sentences and annotated all mentions of the filler of that argument. In tables 3 and 4 we show the list of predicates and the resulting figures of this annotation. In this work we also use the corpus provided for the CoNLL-2008 task. These corpora cover the same BNB documents and include annotated predictions for syntactic dependencies and SuperSense labels as semantic tags. Unlike Gerber and Chai (2010, 2012) we do not use the constituent analysis from the Penn TreeBank. 4 ImpAr algorithm 4.1 Discoursive coherence of predicates Exploring the training dataset of BNB, we observed a very strong discourse effect on the implicit and explicit argument fillers of the predicates. That is, if several instances of the same predicate appear in a well-written discourse, it is very likely that they maintain the same argument fillers. This property holds when joining the different parts-of-speech of the predicates (nominal or verbal) and the explicit or implicit realizations of the argument fillers. For instance, we observed that 46% of all implicit arguments share the same filler with the previous instance of the same predicate while only 14% of them have a different filler. The remaining 40% of all implicit arguments correspond to first occurrences of their predicates. That is, these fillers can not be recovered from previous instances of their predicates. The rationale behind this phenomena seems to be simple. When referring to different aspects of the same event, the writer of a coherent document does not repeat redundant information. They refer to previous predicate instances assuming that the reader already recalls the involved participants. That is, the filler of the different instances of a predicate argument maintain a certain discourse coherence. For instance, in example (1), all the argument positions of the second occurrence of the 1182 predicate loss are missing, but they can be easily inferred from the previous instance of the same predicate. (1) [arg0 The network] had been expected to have [np losses] [arg1 of as much as $20 million] [arg3 on baseball this year]. It isn’t clear how much those [np losses] may widen because of the short Series. Therefore, we propose to exploit this property in order to capture correctly how the fillers of all predicate arguments evolve through a document. Our algorithm, ImpAr, processes the documents sentence by sentence, assuming that sequences of the same predicate (in its nominal or verbal form) share the same argument fillers (explicit or implicit)3. Thus, for every core argument argn of a predicate, ImpAr stores its previous known filler as a default value. If the arguments of a predicate are explicit, they always replace default fillers previously captured. 
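A minimal sketch of this bookkeeping is given below; the data layout, the candidate_scorer callback (standing in for the salience-based selection described in Section 4.2) and the unbounded salience assigned to explicit fillers are illustrative assumptions, not the authors' implementation. The damping of stored saliences anticipates Section 4.3.

def impar_pass(sentences, candidate_scorer, alpha=0.5):
    # sentences[i]: list of (pred, explicit_args, missing_args) for sentence i;
    # explicit_args maps core positions (e.g. 'arg1') to filler strings and
    # missing_args lists the unfilled core positions.
    # candidate_scorer(i, pred, arg) returns (filler, salience) or None.
    EXPLICIT = float("inf")
    defaults = {}            # (pred, arg) -> (filler, base_salience, sentence)
    results = []

    def salience(entry, i):
        filler, base, when = entry
        d = i - when
        # Section 4.3: s' = s - 100 + 100 * alpha^d at sentence distance d
        return base if base == EXPLICIT or d == 0 else base - 100 + 100 * alpha ** d

    for i, sentence in enumerate(sentences):
        for pred, explicit_args, missing_args in sentence:
            filled = dict(explicit_args)
            for arg, filler in explicit_args.items():
                defaults[(pred, arg)] = (filler, EXPLICIT, i)   # explicit always wins
            for arg in missing_args:
                stored = defaults.get((pred, arg))
                best = stored
                best_score = salience(stored, i) if stored else float("-inf")
                local = candidate_scorer(i, pred, arg)
                if local is not None and local[1] > best_score:
                    best = (local[0], local[1], i)
                if best is not None:
                    filled[arg] = best[0]
                    defaults[(pred, arg)] = best
            results.append((pred, filled))
    return results

With alpha = 0.5 this reproduces the worked example given later in Section 4.3: a salience of 260 decays to 210 one sentence later and to 185 two sentences later.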
When there is no antecedent for a particular implicit argument argn, the algorithm tries to find in the surrounding context which participant is the most likely to be the filler according to some salience factors (see Section 4.2). For the following instances, without an explicit filler for a particular argument position, the algorithm repeats the same selection process and compares the new implicit candidate with the default one. That is, the default implicit argument of a predicate with no antecedent can change every time the algorithm finds a filler with a greater salience. A damping factor is applied to reduce the salience of distant predicates. 4.2 Filling arguments without explicit antecedents Filling the implicit arguments of a predicate has been identified as a particular case of coreference, very close to pronoun resolution (Silberer and Frank, 2012). Consequently, for those implicit arguments that have not explicit antecedents, we propose an adaptation of a classic algorithm for deterministic pronoun resolution. This component of our algorithm follows the RAP approach (Lappin and Leass, 1994). When our algorithm needs to fill an implicit predicate argument without an explicit antecedent it considers a set of candidates within a window formed by the sentence of the predicate and the two previous sentences. Then, the algorithm performs the following steps: 3Note that the algorithm could also consider sequences of closely related predicates. 1. Apply two constraints to the candidate list: (a) All candidates that are already explicit arguments of the predicate are ruled out. (b) All candidates commanded by the predicate in the dependency tree are ruled out. 2. Select those candidates that are semantically consistent with the semantic category of the implicit argument. 3. Assign a salience score to each candidate. 4. Sort the candidates by their proximity to the predicate of the implicit argument. 5. Select the candidate with the highest salience value. As a result, the candidate with the highest salience value is selected as the filler of the implicit argument. Thus, this filler with its corresponding salience weight will be also considered in subsequent instances of the same predicate. Now, we explain each step in more detail using example (2). In this example, arg0 is missing for the predicate plan: (2) Quest Medical Inc said it adopted [arg1 a shareholders’ rights] [np plan] in which rights to purchase shares of common stock will be distributed as a dividend to shareholders of record as of Oct 23. Filtering. In the first step, the algorithm filters out the candidates that are actual explicit arguments of the predicate or have a syntactic dependency with the predicate, and therefore, they are in the search space of a traditional SRL system. In our example, the filtering process would remove [a shareholders’ rights] because it is already the explicit argument arg1, and [in which rights to purchase shares of common stock will be distributed as a dividend to shareholders of record as of Oct 23] because it is syntactically commanded by the predicate plan. Semantic consistency. To determine the semantic coherence between the potential candidates and a predicate argument argn, we have exploited the selectional preferences in the same way as in previous SRL and implicit argument resolution works. First, we have designed a list of very general semantic categories. Second, we have semi-automatically assigned one of them to every predicate argument argn in PropBank and NomBank. 
For this, we have used the semantic annotation provided by the training documents of the CoNLL-2008 dataset. This annotation was performed automatically using the SuperSenseTagger (Ciaramita and Altun, 2006) and includes 1183 named-entities and WordNet Super-Senses4. We have also defined a mapping between the semantic classes provided by the SuperSenseTagger and our seven semantic categories (see Table 1 for more details). Then, we have acquired the most common categories of each predicate argument argn. ImpAr algorithm also uses the SuperSenseTagger over the documents to be processed from BNB to check if the candidate belongs to the expected semantic category of the implicit argument to be filled. Following the example above, [Quest Medical Inc] is tagged as an ORGANIZATION by the SuperSenseTagger. Therefore, it belongs to our semantic category COGNITIVE. As the semantic category for the implicit argument arg0 for the predicate plan has been recognized to be also COGNITIVE, [Quest Medical Inc] remains in the list of candidates as a possible filler. Semantic category Name-entities Super-Senses COGNITIVE PERSON noun.person ORGANIZATION noun.group ANIMAL noun.animal ... ... TANGIBLE PRODUCT noun.artifact SUBSTANCE noun.object ... ... EVENTIVE GAME noun.act DISEASE noun.communication ... ... RELATIVE noun.shape noun.attribute ... LOCATIVE LOCATION noun.location TIME DATE noun.time MESURABLE QUANTITY noun.quantity PERCENT ... Table 1: Links between the semantic categories and some name-entities and super-senses. Salience weighting. In this process, the algorithm assigns to each candidate a set of salience factors that scores its prominence. The sentence recency factor prioritizes the candidates that occur close to the same sentence of the predicate. The subject, direct object, indirect object and nonadverbial factors weight the salience of the candidate depending on the syntactic role they belong to. Additionally, the head of these syntactic roles are prioritized by the head factor. We have used the same weights, listed in table 2, proposed by Lappin and Leass (1994). In the example, candidate [Quest Medical Inc] is in the same sentence as the predicate plan, it 4Lexicographic files according to WordNet terminology. Factor type weight Sentence recency 100 Subject 80 Direct object 50 Indirect object 40 Head 80 Non-adverbial 50 Table 2: Weights assigned to each salience factor. belongs to a subject, and, indeed, it is the head of that subject. Hence, the salience score for this candidate is: 100 + 80 + 80 = 260. 4.3 Damping the salience of the default candidate As the algorithm maintains the default candidate until an explicit filler appears, potential errors produced in the automatic selection process explained above can spread to distant implicit instances, specially when the salience score of the default candidate is high. In order to reduce the impact of these errors we have included a damping factor that is applied sentence by sentence to the salience value of the default candidate. ImpAr applies that damping factor, r, as follows. It assumes that, independently of the initial salience assigned, 100 points of the salience score came from the sentence recency factor. Then, the algorithm changes this value multiplying it by r. So, given a salience score s, the value of the score in a following sentence, s′, is: s′ = s −100 + 100 · r Obviously, the value of r must be defined without harming excessively those cases where the default candidate has been correctly identified. 
For this, we studied in the training dataset the cases of implicit arguments filled with the default candidate. Figure 1 shows that the influence of the default filler is much higher in near sentences that in more distance ones. We tried to mimic a damping factor following this distribution. That is, to maintain high score salience for the near sentences while strongly decreasing them in the subsequent ones. In this way, if the filler of the implicit argument is wrongly identified, the error only spreads to the nearest instances. If the identification is correct, a lower score for more distance sentences is not too harmful. The distribution shown in figure 1 follows an exponential decay, therefore we have described the damping factor as a curve like the following, where α must be a value within 0 and 1: 1184 Figure 1: Distances between the implicit argument and the default candidate. The y axis indicate the percentage of cases occurring in each sentence distance, expressed in x r = αd In this function, d stands for the sentence distance and r for the damping factor to apply in that sentence. In this paper, we have decided to set the value of α to 0.5. r = 0.5d This value maintains the influence of the default fillers with high salience in near sentences. But it decreases that influence strongly in the following. In order to illustrate the whole process we will use the previous example. In that case, [Quest Medical Inc] is selected as the arg0 of plan with a salience score of 260. Therefore [Quest Medical Inc] becomes the default arg0 of plan. In the following sentence the damping factor is: 0.5 = 0.51 Therefore, its salience score changes to 260 − 100+100·0.5 = 210. Then, the algorithm changes the default filler for arg0 only if it finds a candidate that scores higher in their current context. At two sentence distance, the resulting score for the default filler is 260 −100 + 100 · 0.25 = 185. In this way, at more distance sentences, the influence of the default filler of arg0 becomes smaller. 5 Evaluation In order to evaluate the performance of the ImpAr algorithm, we have followed the evaluation method presented by Gerber and Chai (2010, 2012). For every argument position in the goldstandard the scorer expects a single predicted constituent to fill in. In order to evaluate the correct span of a constituent, a prediction is scored using the Dice coefficient: 2|Predicted ∩True| |Predicted| + |True| The function above relates the set of tokens that form a predicted constituent, Predicted, and the set of tokens that are part of an annotated constituent in the gold-standard, True. For each missing argument, the gold-standard includes the whole coreference chain of the filler. Therefore, the scorer selects from all coreferent mentions the highest Dice value. If the predicted span does not cover the head of the annotated filler, the scorer returns zero. Then, Precision is calculated by the sum of all prediction scores divided by the number of attempts carried out by the system. Recall is equal to the sum of the prediction scores divided by the number of actual annotations in the goldstandard. F-measure is calculated as the harmonic mean of recall and precision. Traditionally, there have been two approaches to develop SRL systems, one based on constituent trees and the other one based on syntactic dependencies. Additionally, the evaluation of both types of systems has been performed differently. 
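The scoring procedure just described can be summarised in a few lines; the treatment of coreferent mentions whose head is not covered, and the handling of spurious attempts, follow our reading of the description rather than the official scorer.

def dice(predicted, gold_tokens):
    # Dice coefficient between two sets of token indices
    p, t = set(predicted), set(gold_tokens)
    return 2.0 * len(p & t) / (len(p) + len(t))

def score_prediction(predicted, gold_mentions):
    # gold_mentions: (token_set, head_index) pairs for the whole coreference
    # chain of one implicit argument; zero if no mention head is covered
    return max((dice(predicted, toks) for toks, head in gold_mentions
                if head in predicted), default=0.0)

def evaluate(predictions, gold):
    # predictions / gold are dicts keyed by (predicate instance, arg position)
    scores = [score_prediction(predictions[k], gold[k])
              for k in predictions if k in gold]
    total = sum(scores)
    precision = total / len(predictions) if predictions else 0.0   # per attempt
    recall = total / len(gold) if gold else 0.0                    # per annotation
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1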
For constituent based SRL systems the scorers evaluate the correct span of the filler, while for dependency based systems the scorer just check if the systems are able to capture the head token of the filler. As shown above, previous works in implicit argument resolution proposed a metric that involves the correct identification of the whole span of the filler. ImpAr algorithm works with syntactic dependencies and therefore it only returns the head token of the filler. In order to compare our results with previous works, we had to apply some simple heuristics to guess the correct span of the filler. Obviously, this process inserts some noise in the final evaluation. We have performed a first evaluation over the test set used in (Gerber and Chai, 2010). This dataset contains 437 predicate instances but just 246 argument positions are implicitly filled. Table 3 includes the results obtained by ImpAr, the results of the system presented by Gerber and Chai (2010) and the baseline proposed for the task. Best results are marked in bold5. For all predicates, ImpAr improves over the baseline (19.3 points higher in the overall F1). Our system also outperforms the one presented by Gerber and Chai (2010). Interestingly, both systems present very different performances predicate by predicate. For 5No proper significance test can be carried out without the the full predictions of all systems involved. 1185 Baseline Gerber & Chai ImpAr #Inst. #Imp. F1 P R F1 P R F1 sale 64 65 36.2 47.2 41.7 44.2 41.2 39.4 40.3 price 121 53 15.4 36.0 32.6 34.2 53.3 53.3 53.3 investor 78 35 9.8 36.8 40.0 38.4 43.0 39.5 41.2 bid 19 26 32.3 23.8 19.2 21.3 52.9 51.0 52.0 plan 25 20 38.5 78.6 55.0 64.7 40.7 40.7 40.7 cost 25 17 34.8 61.1 64.7 62.9 56.1 50.2 53.0 loss 30 12 52.6 83.3 83.3 83.3 68.4 63.5 65.8 loan 11 9 18.2 42.9 33.3 37.5 25.0 20.0 22.2 investment 21 8 0.0 40.0 25.0 30.8 47.6 35.7 40.8 fund 43 6 0.0 14.3 16.7 15.4 66.7 33.3 44.4 Overall 437 246 26.5 44.5 40.4 42.3 47.9 43.8 45.8 Table 3: Evaluation with the test. The results from (Gerber and Chai, 2010) are included. Baseline Gerber & Chai ImpAr #Inst. #Imp. F1 P R F1 P R F1 sale 184 181 37.3 59.2 44.8 51.0 44.3 43.3 43.8 price 216 138 34.6 56.0 48.7 52.1 55.0 54.5 54.7 investor 160 108 5.1 46.7 39.8 43.0 28.2 27.0 27.6 bid 88 124 23.8 60.0 36.3 45.2 48.4 41.8 45.0 plan 100 77 32.3 59.6 44.1 50.7 47.0 47.0 47.0 cost 101 86 17.8 62.5 50.9 56.1 49.2 43.7 46.2 loss 104 62 54.7 72.5 59.7 65.5 63.0 58.2 60.5 loan 84 82 31.2 67.2 50.0 57.3 56.4 45.6 50.6 investment 102 52 15.5 32.9 34.2 33.6 41.2 30.9 35.4 fund 108 56 15.5 80.0 35.7 49.4 55.6 44.6 49.5 Overall 1,247 966 28.9 57.9 44.5 50.3 47.7 43.0 45.3 Table 4: Evaluation with the full dataset. The results from (Gerber and Chai, 2012) are included. instance, our system obtains much higher results for the predicates bid and fund, while much lower for loss and loan. In general, ImpAr seems to be more robust since it obtains similar performances for all predicates. In fact, the standard deviation, σ , of F1 measure is 10.98 for ImpAr while this value for the (Gerber and Chai, 2010) system is 20.00. In a more recent work, Gerber and Chai (2012) presented some improvements of their previous results. In this work, they extended the evaluation of their model using the whole dataset and not just the testing documents. Applying a crossvalidated approach they tried to solve some problems that they found in the previous evaluation, like the small size of the testing set. 
For this work, they also studied a wider set of features, specially, they experimented with some statistics learnt from parts of GigaWord automatically annotated. Table 4 shows that the improvement over their previous system was remarkable. The system also seems to be more stable across predicates. For comparison purposes, we also included the performance of ImpAr applied over the whole dataset. The results in table 4 show that, although ImpAr still achieves the best results in some cases, this time, it cannot beat the overall results obtained by the supervised model. In fact, both systems obtain a very similar recall, but the system from (Gerber and Chai, 2012) obtains much higher precision. In both cases, the σ value of F1 is reduced, 8.81 for ImpAr and 8.21 for (Gerber and Chai, 2012). However, ImpAr obtains very similar performance independently of the testing dataset what proves the robustness of the algorithm. This suggests that our algorithm can obtain strong results also for other corpus and predicates. Instead, the supervised approach would need a large amount of manual annotations for every predicate to be processed. 6 Discussion 6.1 Component Analysis In order to assess the contribution of each system component, we also tested the performance of ImpAr algorithm when disabling only one of its components. With this evaluations we pretend to sight the particular contribution of each component. In table 5 we present the results obtained in the following experiments for the two testing sets explained in section 5: • Exp1: The damping factor is disabled. All selected fillers maintain the same salience over 1186 all sentences. • Exp2: Only explicit fillers are considered as candidates6. • Exp3: No default fillers are considered as candidates. As expected, we observe a very similar performances in both datasets. Additionally, the highest loss appears when the default fillers are ruled out (Exp3). In particular, it also seems that the explicit information from previous predicates provides the most correct evidence (Exp2). Also note that for Exp2, the system obtains the highest precision. This means that the most accurate cases are obtained by previous explicit antecedents. test full P R F1 P R F1 full 47.9 43.8 45.8 47.7 43.0 45.3 Exp1 45.7 41.8 43.6 47.1 42.5 44.8 Exp2 51.2 24.6 33.2 55.3 25.5 34.9 Exp3 34.6 29.7 31.9 34.8 28.9 31.5 Exp4 42.6 37.9 40.1 37.5 31.2 34.1 Exp5 38.8 34.5 36.5 35.7 29.7 32.4 Exp6 53.3 48.7 50.9 52.4 47.2 49.6 Table 5: Exp1, Exp2 and Exp3 correspond to ablations of the components. Exp3 and Exp4 are experiments over the cases that are not solved by explicit antecedents. Exp6 evaluates the system capturing just the head tokens of the constituents. As Exp1 also includes instances with explicit antecedents, and for these cases the damping factor component has no effect, we have designed two additional experiments: • Exp4: Full system for the cases not solved by explicit antecedents. • Exp5: As in Exp4 but with the damping factor disabled. As expected, now the contribution of the dumping factor seems to be more relevant, in particular, for the test dataset. 6.2 Correct span of the fillers As explained in Section 5, our algorithm works with syntactic dependencies and its predictions only return the head token of the filler. Obtaining the correct constituents from syntactic dependencies is not trivial. In this work we have applied a simple heuristic that returns all the descendant 6That is, implicit arguments without explicit antecedents are not filled. 
tokens of the predicted head token. This naive process inserts some noise to the evaluation of the system. For example, from the following sentence our system gives the following prediction for an implicit arg1 of an instance of the predicate sale: Ports of Call Inc. reached agreements to sell its remaining seven aircraft [arg1 to buyers] that weren’t disclosed. But the actual gold-standard annotation is: [arg1 buyers that weren’t disclosed]. Although the head of the constituent, buyers, is correctly captured by ImpAr, the final prediction is heavily penalized by the scoring method. Table 5 presents the results of ImpAr when evaluating the head tokens of the constituents only (Exp6). These results show that the current performance of our system can be easily improved applying a more accurate process for capturing the correct span. 7 Conclusions and Future Work In this work we have presented a robust deterministic approach for implicit Semantic Role Labeling. The method exploits a very simple but relevant discoursive coherence property that holds over explicit and implicit arguments of closely related nominal and verbal predicates. This property states that if several instances of the same predicate appear in a well-written discourse, it is very likely that they maintain the same argument fillers. We have shown the importance of this phenomenon for recovering the implicit information about semantic roles. To our knowledge, this is the first empirical study that proves this phenomenon. Based on these observations, we have developed a new deterministic algorithm, ImpAr, that obtains very competitive and robust performances with respect to supervised approaches. That is, it can be applied where there is no available manual annotations to train. The code of this algorithm is publicly available and can be applied to any document. As input it only needs the document with explicit semantic role labeling and Super-Sense annotations. These annotations can be easily obtained from plain text using available tools7, what makes this algorithm the first effective tool available for implicit SRL. As it can be easily seen, ImpAr has a large margin for improvement. For instance, providing more accurate spans for the fillers. We also plan 7We recommend mate-tools (Bj¨orkelund et al., 2009) and SuperSenseTagger (Ciaramita and Altun, 2006). 1187 to test alternative approaches to solve the arguments without explicit antecedents. For instance, our system can also profit from additional annotations like coreference, that has proved its utility in previous works. Finally, we also plan to study our approach on different languages and datasets (for instance, the SemEval-2010 dataset). 8 Acknowledgment We are grateful to the anonymous reviewers for their insightful comments. This work has been partially funded by SKaTer (TIN201238584-C06-02), OpeNER (FP7-ICT-2011-SMEDCL-296451) and NewsReader (FP7-ICT-20118-316404), as well as the READERS project with the financial support of MINECO, ANR (convention ANR-12-CHRI-0004-03) and EPSRC (EP/K017845/1) in the framework of ERA-NET CHIST-ERA (UE FP7/2007-2013). References Baker, C. F., C. J. Fillmore, and J. B. Lowe (1998). The berkeley framenet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, ACL ’98, Montreal, Quebec, Canada, pp. 86–90. Bj¨orkelund, A., L. Hafdell, and P. Nugues (2009). Multilingual semantic role labeling. 
In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL ’09, Boulder, Colorado, USA, pp. 43–48. Carreras, X. and L. M`arquez (2005). Introduction to the conll-2005 shared task: Semantic role labeling. In Proceedings of the 9th Conference on Computational Natural Language Learning, CoNLL ’05, Ann Arbor, Michigan, USA, pp. 152–164. Chen, D., N. Schneider, D. Das, and N. A. Smith (2010). Semafor: Frame argument resolution with log-linear models. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval ’10, Los Angeles, California, USA, pp. 264–267. Ciaramita, M. and Y. Altun (2006). Broadcoverage sense disambiguation and information extraction with a supersense sequence tagger. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP ’06, Sydney, Australia, pp. 594–602. Dahl, D. A., M. S. Palmer, and R. J. Passonneau (1987). Nominalizations in pundit. In In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, ACL ’87, Stanford, California, USA, pp. 131–139. Fellbaum, C. (1998). WordNet: an electronic lexical database. MIT Press. Gerber, M. and J. Chai (2012, December). Semantic role labeling of implicit arguments for nominal predicates. Computational Linguistics 38(4), 755–798. Gerber, M. and J. Y. Chai (2010). Beyond nombank: a study of implicit arguments for nominal predicates. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, Uppsala, Sweden, pp. 1583–1592. Gildea, D. and D. Jurafsky (2000). Automatic labeling of semantic roles. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, Hong Kong, pp. 512–520. Gildea, D. and D. Jurafsky (2002, September). Automatic labeling of semantic roles. Computational Linguistics 28(3), 245–288. Gorinski, P., J. Ruppenhofer, and C. Sporleder (2013). Towards weakly supervised resolution of null instantiations. In Proceedings of the 10th International Conference on Computational Semantics, IWCS ’13, Potsdam, Germany, pp. 119–130. Hajiˇc, J., M. Ciaramita, R. Johansson, D. Kawahara, M. A. Mart´ı, L. M`arquez, A. Meyers, J. Nivre, S. Pad´o, J. ˇStˇep´anek, P. Straˇn´ak, M. Surdeanu, N. Xue, and Y. Zhang (2009). The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL ’09, Boulder, Colorado, USA, pp. 1–18. Laparra, E. and G. Rigau (2012). Exploiting explicit annotations and semantic types for implicit argument resolution. In 6th IEEE International Conference on Semantic Computing, ICSC ’12, Palermo, Italy, pp. 75–78. 1188 Laparra, E. and G. Rigau (2013). Sources of evidence for implicit argument resolution. In Proceedings of the 10th International Conference on Computational Semantics, IWCS ’13, Potsdam, Germany, pp. 155–166. Lappin, S. and H. J. Leass (1994, December). An algorithm for pronominal anaphora resolution. Computational Linguistics 20(4), 535–561. Meyers, A., R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman (2004). The nombank project: An interim report. In In Proceedings of the NAACL/HLT Workshop on Frontiers in Corpus Annotation, HLT-NAACL ’04, Boston, Massachusetts, USA, pp. 24–31. Moor, T., M. Roth, and A. Frank (2013). Predicate-specific annotations for implicit role binding: Corpus annotation, data analysis and evaluation experiments. 
In Proceedings of the 10th International Conference on Computational Semantics, IWCS ’13, Potsdam, Germany, pp. 369–375. Palmer, M., D. Gildea, and P. Kingsbury (2005, March). The proposition bank: An annotated corpus of semantic roles. Computational Linguistics 31(1), 71–106. Palmer, M. S., D. A. Dahl, R. J. Schiffman, L. Hirschman, M. Linebarger, and J. Dowding (1986). Recovering implicit information. In Proceedings of the 24th annual meeting on Association for Computational Linguistics, ACL ’86, New York, New York, USA, pp. 10–19. Raghunathan, K., H. Lee, S. Rangarajan, N. Chambers, M. Surdeanu, D. Jurafsky, and C. Manning (2010). A multi-pass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, Cambridge, Massachusetts, USA, pp. 492–501. Ruppenhofer, J., P. Gorinski, and C. Sporleder (2011). In search of missing arguments: A linguistic approach. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, RANLP ’11, Hissar, Bulgaria, pp. 331–338. Ruppenhofer, J., C. Sporleder, R. Morante, C. Baker, and M. Palmer (2010). Semeval-2010 task 10: Linking events and their participants in discourse. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval ’10, Los Angeles, California, USA, pp. 45–50. Silberer, C. and A. Frank (2012). Casting implicit role linking as an anaphora resolution task. In Proceedings of the First Joint Conference on Lexical and Computational Semantics, *SEM ’12, Montr´eal, Canada, pp. 1–10. Surdeanu, M., R. Johansson, A. Meyers, L. M`arquez, and J. Nivre (2008). The CoNLL2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the Twelfth Conference on Natural Language Learning, CoNLL ’08, Manchester, United Kingdom, pp. 159–177. Tetreault, J. R. (2002). Implicit role reference. In International Symposium on Reference Resolution for Natural Language Processing, Alicante, Spain, pp. 109–115. Tonelli, S. and R. Delmonte (2010). Venses++: Adapting a deep semantic processing system to the identification of null instantiations. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval ’10, Los Angeles, California, USA, pp. 296–299. Tonelli, S. and R. Delmonte (2011). Desperately seeking implicit arguments in text. In Proceedings of the ACL 2011 Workshop on Relational Models of Semantics, RELMS ’11, Portland, Oregon, USA, pp. 54–62. Whittemore, G., M. Macpherson, and G. Carlson (1991). Event-building through role-filling and anaphora resolution. In Proceedings of the 29th annual meeting on Association for Computational Linguistics, ACL ’91, Berkeley, California, USA, pp. 17–24. 1189
2013
116
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1190–1200, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Cross-lingual Transfer of Semantic Role Labeling Models Mikhail Kozhevnikov and Ivan Titov Saarland University, Postfach 15 11 50 66041 Saarbr¨ucken, Germany {mkozhevn|titov}@mmci.uni-saarland.de Abstract Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method. 1 Background and Motivation Semantic role labeling has proven useful in many natural language processing tasks, such as question answering (Shen and Lapata, 2007; Kaisser and Webber, 2007), textual entailment (Sammons et al., 2009), machine translation (Wu and Fung, 2009; Liu and Gildea, 2010; Gao and Vogel, 2011) and dialogue systems (Basili et al., 2009; van der Plas et al., 2009). Multiple models have been designed to automatically predict semantic roles, and a considerable amount of data has been annotated to train these models, if only for a few more popular languages. As the annotation is costly, one would like to leverage existing resources to minimize the human effort required to construct a model for a new language. A number of approaches to the construction of semantic role labeling models for new languages have been proposed. On one end of the scale is unsupervised SRL, such as Grenager and Manning (2006), which requires some expert knowledge, but no labeled data. It clusters together arguments that should bear the same semantic role, but does not assign a particular role to each cluster. On the other end is annotating a new dataset from scratch. There are also intermediate options, which often make use of similarities between languages. This way, if an accurate model exists for one language, it should help simplify the construction of a model for another, related language. The approaches in this third group often use parallel data to bridge the gap between languages. Cross-lingual annotation projection systems (Pad´o and Lapata, 2009), for example, propagate information directly via word alignment links. However, they are very sensitive to the quality of parallel data, as well as the accuracy of a sourcelanguage model on it. An alternative approach, known as cross-lingual model transfer, or cross-lingual model adaptation, consists of modifying a source-language model to make it directly applicable to a new language. This usually involves constructing a shared feature representation across the two languages. McDonald et al. (2011) successfully apply this idea to the transfer of dependency parsers, using part-ofspeech tags as the shared representation of words. A later extension of T¨ackstr¨om et al. (2012) enriches this representation with cross-lingual word clusters, considerably improving the performance. 
In the case of SRL, a shared representation that is purely syntactic is likely to be insufficient, since structures with different semantics may be realized by the same syntactic construct, for example “in August” vs “in Britain”. However with the help of recently introduced cross-lingual word represen1190 tations, such as the cross-lingual clustering mentioned above or cross-lingual distributed word representations of Klementiev et al. (2012), we may be able to transfer models of shallow semantics in a similar fashion. In this work we construct a shared feature representation for a pair of languages, employing crosslingual representations of syntactic and lexical information, train a semantic role labeling model on one language and apply it to the other one. This approach yields an SRL model for a new language at a very low cost, effectively requiring only a source language model and parallel data. We evaluate on five (directed) language pairs – EN-ZH, ZH-EN, EN-CZ, CZ-EN and EN-FR, where EN, FR, CZ and ZH denote English, French, Czech and Chinese, respectively. The transferred model is compared against two baselines: an unsupervised SRL system and a model trained on the output of a cross-lingual annotation projection system. In the next section we will describe our setup, then in section 3 present the shared feature representation we use, discuss the evaluation data and other technical aspects in section 4, present the results and conclude with an overview of related work. 2 Setup The purpose of the study is not to develop a yet another semantic role labeling system – any existing SRL system can (after some modification) be used in this setup – but to assess the practical applicability of cross-lingual model transfer to this problem, compare it against the alternatives and identify its strong/weak points depending on a particular setup. 2.1 Semantic Role Labeling Model We consider the dependency-based version of semantic role labeling as described in Hajiˇc et al. (2009) and transfer an SRL model from one language to another. We only consider verbal predicates and ignore the predicate disambiguation stage. We also assume that the predicate identification information is available – in most languages it can be obtained using a relatively simple heuristic based on part-of-speech tags. The model performs argument identification and classification (Johansson and Nugues, 2008) separately in a pipeline – first each candidate is classified as being or not being a head of an argument phrase with respect to the predicate in question and then each of the arguments is assigned a role from a given inventory. The model is factorized over arguments – the decisions regarding the classification of different arguments are made independently of each other. With respect to the use of syntactic annotation we consider two options: using an existing dependency parser for the target language and obtaining one by means of cross-lingual transfer (see section 4.2). Following McDonald et al. (2011), we assume that a part-of-speech tagger is available for the target language. 2.2 SRL in the Low-resource Setting Several approaches have been proposed to obtain an SRL model for a new language with little or no manual annotation. Unsupervised SRL models (Lang and Lapata, 2010) cluster the arguments of predicates in a given corpus according to their semantic roles. 
The performance of such models can be impressive, especially for those languages where semantic roles correlate strongly with syntactic relation of the argument to its predicate. However, assigning meaningful role labels to the resulting clusters requires additional effort and the model’s parameters generally need some adjustment for every language. If the necessary resources are already available for a closely related language, they can be utilized to facilitate the construction of a model for the target language. This can be achieved either by means of cross-lingual annotation projection (Yarowsky et al., 2001) or by cross-lingual model transfer (Zeman and Resnik, 2008). This last approach is the one we are considering in this work, and the other two options are treated as baselines. The unsupervised model will be further referred to as UNSUP and the projection baseline as PROJ. 2.3 Evaluation Measures We use the F1 measure as a metric for the argument identification stage and accuracy as an aggregate measure of argument classification performance. When comparing to the unsupervised SRL system the clustering evaluation measures are used instead. These are purity and collocation:

$$\mathrm{PU} = \frac{1}{N} \sum_i \max_j |G_j \cap C_i|, \qquad \mathrm{CO} = \frac{1}{N} \sum_j \max_i |G_j \cap C_i|,$$

where C_i is the set of arguments in the i-th induced cluster, G_j is the set of arguments in the j-th gold cluster and N is the total number of arguments. We report the harmonic mean of the two (Lang and Lapata, 2011) and denote it F_1^c to avoid confusing it with the supervised metric. 3 Model Transfer The idea of this work is to abstract the model away from the particular source language and apply it to a new one. This setup requires that we use the same feature representation for both languages, for example part-of-speech tags and dependency relation labels should be from the same inventory. Some features are not applicable to certain languages because the corresponding phenomena are absent in them. For example, consider a strongly inflected language and an analytic one. While the latter can usually convey the information encoded in the word form in the former one (number, gender, etc.), finding a shared feature representation for such information is non-trivial. In this study we will confine ourselves to those features that are applicable to all languages in question, namely: part-of-speech tags, syntactic dependency structures and representations of the word’s identity. 3.1 Lexical Information We train a model on one language and apply it to a different one. In order for this to work, the words of the two languages have to be mapped into a common feature space. It is also desirable that closely related words from both languages have similar representations in this space. Word mapping. The first option is simply to use the source language words as the shared representation. Here every source language word would have itself as its representation and every target word would map into a source word that corresponds to it. In other words, we supply the model with a gloss of the target sentence. The mapping (bilingual dictionary) we use is derived from a word-aligned parallel corpus, by identifying, for each word in the target language, the word in the source language it is most often aligned to. Cross-lingual clusters.
There is no guarantee that each of the words in the evaluation data is present in our dictionary, nor that the corresponding source-language word is present in the training data, so the model would benefit from the ability to generalize over closely related words. This can, for example, be achieved by using cross-lingual word clusters induced in T¨ackstr¨om et al. (2012). We incorporate these clusters as features into our model. 3.2 Syntactic Information Part-of-speech Tags. We map part-of-speech tags into the universal tagset following Petrov et al. (2012). This may have a negative effect on the performance of a monolingual model, since most part-of-speech tagsets are more fine-grained than the universal POS tags considered here. For example Penn Treebank inventory contains 36 tags and the universal POS tagset – only 12. Since the finergrained POS tags often reflect more languagespecific phenomena, however, they would only be useful for very closely related languages in the cross-lingual setting. The universal part-of-speech tags used in evaluation are derived from gold-standard annotation for all languages except French, where predicted ones had to be used instead. Dependency Structure. Another important aspect of syntactic information is the dependency structure. Most dependency relation inventories are language-specific, and finding a shared representation for them is a challenging problem. One could map dependency relations into a simplified form that would be shared between languages, as it is done for part-of-speech tags in Petrov et al. (2012). The extent to which this would be useful, however, depends on the similarity of syntactic-semantic interfaces of the languages in question. In this work we discard the dependency relation labels where the inventories do not match and only consider the unlabeled syntactic dependency graph. Some discrepancies, such as variations in attachment order, may be present even there, but this does not appear to be the case with the datasets we use for evaluation. If a target language is poor in resources, one can obtain a dependency parser for the target language by means of cross-lingual model transfer (Zeman and Resnik, 2008). We 1192 take this into account and evaluate both using the original dependency structures and the ones obtained by means of cross-lingual model transfer. 3.3 The Model The model we use is based on that of Bj¨orkelund et al. (2009). It is comprised of a set of linear classifiers trained using Liblinear (Fan et al., 2008). The feature model was modified to accommodate the cross-lingual cluster features and the reranker component was not used. We do not model the interaction between different argument roles in the same predicate. While this has been found useful, in the cross-lingual setup one has to be careful with the assumptions made. For example, modeling the sequence of roles using a Markov chain (Thompson et al., 2003) may not work well in the present setting, especially between distant languages, as the order or arguments is not necessarily preserved. Most constraints that prove useful for SRL (Chang et al., 2007) also require customization when applied to a new language, and some rely on languagespecific resources, such as a valency lexicon. Taking into account the interaction between different arguments of a predicate is likely to improve the performance of the transferred model, but this is outside the scope of this work. 
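To make this architecture concrete, the following is a minimal, illustrative Python sketch of the two-stage pipeline described in this section: one binary classifier decides whether a candidate word heads an argument of the predicate, and a second classifier assigns a role to each identified argument, with decisions made independently per argument. The candidate_features helper, the dictionary-style sentence representation, and the use of scikit-learn's logistic regression are assumptions of this sketch; the authors train linear classifiers with Liblinear over a much richer feature model (Björkelund et al., 2009).

```python
# Illustrative sketch of the two-stage transfer SRL pipeline (not the authors' code).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def candidate_features(sent, pred_idx, cand_idx):
    """First-order features over the shared representation (universal POS, gloss, cluster)."""
    pred, cand = sent[pred_idx], sent[cand_idx]
    return {
        "pred_pos": pred["upos"],
        "arg_pos": cand["upos"],
        "arg_gloss": cand.get("gloss", "_"),
        "arg_cluster": cand.get("cluster", "_"),
        "position": "left" if cand_idx < pred_idx else "right",
    }

class TransferSRL:
    """Argument identification followed by role classification, factorized per argument."""

    def __init__(self):
        self.ident = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
        self.label = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))

    def fit(self, ident_data, label_data):
        X_i, y_i = zip(*ident_data)   # (feature dict, is_argument) pairs for all candidates
        X_l, y_l = zip(*label_data)   # (feature dict, role) pairs for gold arguments only
        self.ident.fit(list(X_i), list(y_i))
        self.label.fit(list(X_l), list(y_l))

    def predict(self, sent, pred_idx):
        roles = {}
        for i in range(len(sent)):
            if i == pred_idx:
                continue
            feats = candidate_features(sent, pred_idx, i)
            if self.ident.predict([feats])[0]:                 # stage 1: argument head or not
                roles[i] = self.label.predict([feats])[0]      # stage 2: assign a role
        return roles
```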
3.4 Feature Selection Compatibility of feature representations is necessary but not sufficient for successful model transfer. We have to make sure that the features we use are predictive of similar outcomes in the two languages as well. Depending on the pair of languages in question, different aspects of the feature representation will retain or lose their predictive power. We can be reasonably certain that the identity of an argument word is predictive of its semantic role in any language, but it might or might not be true of, for example, the word directly preceding the argument word. It is therefore important to prevent the model from capturing overly specific aspects of the source language, which we do by confining the model to first-order features. We also avoid feature selection, which, performed on the source language, is unlikely to help the model to better generalize to the target one. The experiments confirm that feature selection and the use of second-order features degrade the performance of the transferred model.

POS      part-of-speech tags
Synt     unlabeled dependency graph
Cls      cross-lingual word clusters
Gloss    glossed word forms
Deprel   dependency relations

Table 1: Feature groups.

3.5 Feature Groups For each word, we use its part-of-speech tag, cross-lingual cluster id, word identity (glossed, when evaluating on the target language) and its dependency relation to its parent. Features associated with an argument word include the attributes of the predicate word, the argument word, its parent, siblings and children, and the words directly preceding and following it. Also included are the sequences of part-of-speech tags and dependency relations on the path between the predicate and the argument. Since we are also interested in the impact of different aspects of the feature representation, we divide the features into groups as summarized in Table 1 and evaluate their respective contributions to the performance of the model. If a feature group is enabled – the model has access to the corresponding source of information. For example, if only POS group is enabled, the model relies on the part-of-speech tags of the argument, the predicate and the words to the right and left of the argument word. If Synt is enabled too, it also uses the POS tags of the argument’s parent, children and siblings. Word order information constitutes an implicit group that is always available. It includes the Position feature, which indicates whether the argument is located to the left or to the right of the predicate, and allows the model to look up the attributes of the words directly preceding and following the argument word. The model we compare against the baselines uses all applicable feature groups (Deprel is only used in EN-CZ and CZ-EN experiments with original syntax). 4 Evaluation 4.1 Datasets and Preprocessing Evaluation of the cross-lingual model transfer requires a rather specific kind of dataset. Namely, the data in both languages has to be annotated with the same set of semantic roles following the same (or compatible) guidelines, which is seldom the case. We have identified three language pairs for which such resources are available: English-Chinese, English-Czech and English-French. The evaluation datasets for English and Chinese are those from the CoNLL Shared Task 2009 (Hajič et al., 2009) (henceforth CoNLL-ST). Their annotation in the CoNLL-ST is not identical, but the guidelines for “core” semantic roles are similar (Kingsbury et al., 2004), so we evaluate only on core roles here.
The data for the second language pair is drawn from the Prague Czech-English Dependency Treebank 2.0 (Hajiˇc et al., 2012), which we converted to a format similar to that of CoNLL-ST1. The original annotation uses the tectogrammatical representation (Hajiˇc, 2002) and an inventory of semantic roles (or functors), most of which are interpretable across various predicates. Also note that the syntactic annotation of English and Czech in PCEDT 2.0 is quite similar (to the extent permitted by the difference in the structure of the two languages) and we can use the dependency relations in our experiments. For English-French, the English CoNLL-ST dataset was used as a source and the model was evaluated on the manually annotated dataset from van der Plas et al. (2011). The latter contains one thousand sentences from the French part of the Europarl (Koehn, 2005) corpus, annotated with semantic roles following an adapted version of PropBank (Palmer et al., 2005) guidelines. The authors perform annotation projection from English to French, using a joint model of syntax and semantics and employing heuristics for filtering. We use a model trained on the output of this projection system as one of the baselines. The evaluation dataset is relatively small in this case, so we perform the transfer only one-way, from English to French. The part-of-speech tags in all datasets were replaced with the universal POS tags of Petrov et al. (2012). For Czech, we have augmented the mappings to account for the tags that were not present in the datasets from which the original mappings were derived. Namely, tag “t” is mapped to “VERB” and “Y” – to “PRON”. We use parallel data to construct a bilingual dictionary used in word mapping, as well as in the projection baseline. For English-Czech 1see http://www.ml4nlp.de/code-and-data/treex2conll and English-French, the data is drawn from Europarl (Koehn, 2005), for English-Chinese – from MultiUN (Eisele and Chen, 2010). The word alignments were obtained using GIZA++ (Och and Ney, 2003) and the intersection heuristic. 4.2 Syntactic Transfer In the low-resource setting, we cannot always rely on the availability of an accurate dependency parser for the target language. If one is not available, the natural solution would be to use crosslingual model transfer to obtain it. Unfortunately, the models presented in the previous work, such as Zeman and Resnik (2008), McDonald et al. (2011) and T¨ackstr¨om et al. (2012), were not made available, so we reproduced the direct transfer algorithm of McDonald et al. (2011), using Malt parser (Nivre, 2008) and the same set of features. We did not reimplement the projected transfer algorithm, however, and used the default training procedure instead of perceptron-based learning. The dependency structure thus obtained is, of course, only a rough approximation – even a much more sophisticated algorithm may not perform well when transferring syntax between such languages as Czech and English, given the inherent difference in their structure. The scores are shown in table 2. We will henceforth refer to the syntactic annotations that were provided with the datasets as original, as opposed to the annotations obtained by means of syntactic transfer. 4.3 Baselines Unsupervised Baseline: We are using a version of the unsupervised semantic role induction system of Titov and Klementiev (2012a) adapted to Setup UAS, % EN-ZH 35 ZH-EN 42 EN-CZ 36 CZ-EN 39 EN-FR 67 Table 2: Syntactic transfer accuracy, unlabeled attachment score (percent). 
Note that in case of French we evaluate against the output of a supervised system, since manual annotation is not available for this dataset. This score does not reflect the true performance of syntactic transfer. 1194 the shared feature representation considered in order to make the scores comparable with those of the transfer model and, more importantly, to enable evaluation on transferred syntax. Note that the original system, tailored to a more expressive language-specific syntactic representation and equipped with heuristics to identify active/passive voice and other phenomena, achieves higher scores than those we report here. Projection Baseline: The projection baseline we use for English-Czech and English-Chinese is a straightforward one: we label the source side of a parallel corpus using the source-language model, then identify those verbs on the target side that are aligned to a predicate, mark them as predicates and propagate the argument roles in the same fashion. A model is then trained on the resulting training data and applied to the test set. For English-French we instead use the output of a fully featured projection model of van der Plas et al. (2011), published in the CLASSiC project. 5 Results In order to ensure that the results are consistent, the test sets, except for the French one, were partitioned into five equal parts (of 5 to 10 thousand sentences each, depending on the dataset) and the evaluation performed separately on each one. All evaluation figures for English, Czech or Chinese below are the average values over the five subsets. In case of French, the evaluation dataset is too small to split it further, so instead we ran the evaluation five times on a randomly selected 80% sample of the evaluation data and averaged over those. In both cases the results are consistent over the subsets, the standard deviation does not exceed 0.5% for the transfer system and projection baseline and 1% for the unsupervised system. 5.1 Argument Identification We summarize the results in table 3. Argument identification is known to rely heavily on syntactic information, so it is unsurprising that it proves inaccurate when transferred syntax is used. Our simple projection baseline suffers from the same problem. Even with original syntactic information available, the performance of argument identification is moderate. Note that the model of (van der Plas et al., 2011), though relying on more expressive syntax, only outperforms the transferred system by 3% (F1) on this task. Setup Syntax TRANS PROJ EN-ZH trans 34.5 13.9 ZH-EN trans 32.6 15.6 EN-CZ trans 46.3 12.4 CZ-EN trans 42.3 22.2 EN-FR trans 61.6 43.5 EN-ZH orig 51.7 19.6 ZH-EN orig 53.2 29.7 EN-CZ orig 63.9 59.3 CZ-EN orig 67.3 60.9 EN-FR orig 71.0 51.3 Table 3: Argument identification, transferred model vs. projection baseline, F1. Most unsupervised SRL approaches assume that the argument identification is performed by some external means, for example heuristically (Lang and Lapata, 2011). Such heuristics or unsupervised approaches to argument identification (Abend et al., 2009) can also be used in the present setup. 5.2 Argument Classification In the following tables, TRANS column contains the results for the transferred system, UNSUP – for the unsupervised baseline and PROJ – for projection baseline. We highlight in bold the higher score where the difference exceeds twice the maximum of the standard deviation estimates of the two results. Table 4 presents the unsupervised evaluation results. 
Note that the unsupervised model performs as well as the transferred one or better where the Setup Syntax TRANS UNSUP EN-ZH trans 83.3 73.9 ZH-EN trans 79.2 67.6 EN-CZ trans 66.4 66.1 CZ-EN trans 68.2 68.7 EN-FR trans 74.6 65.1 EN-ZH orig 84.5 89.7 ZH-EN orig 79.2 83.0 EN-CZ orig 74.1 74.0 CZ-EN orig 74.6 76.7 EN-FR orig 73.3 72.3 Table 4: Argument classification, transferred model vs. unsupervised baseline in terms of the clustering metric F c 1 (see section 2.3). 1195 Setup Syntax TRANS PROJ EN-ZH trans 70.1 69.2 ZH-EN trans 65.6 61.3 EN-CZ trans 50.1 46.3 CZ-EN trans 53.3 54.7 EN-FR trans 65.1 66.1 EN-ZH orig 71.7 69.7 ZH-EN orig 66.1 64.4 EN-CZ orig 59.0 53.2 CZ-EN orig 61.0 60.8 EN-FR orig 63.0 68.0 Table 5: Argument classification, transferred model vs. projection baseline, accuracy. original syntactic dependencies are available. In the more realistic scenario with transferred syntax, however, the transferred model proves more accurate. In table 5 we compare the transferred system with the projection baseline. It is easy to see that the scores vary strongly depending on the language pair, due to both the difference in the annotation scheme used and the degree of relatedness between the languages. The drop in performance when transferring the model to another language is large in every case, though, see table 6. Setup Target Source EN-ZH 71.7 87.1 ZH-EN 66.1 86.2 EN-CZ 59.0 80.1 CZ-EN 61.0 75.4 EN-FR 63.0 82.5 Table 6: Model accuracy on the source and target language using original syntax. The source language scores for English vary between language pairs because of the difference in syntactic annotation and role subset used. We also include the individual F1 scores for the top-10 most frequent labels for EN-CZ transfer with original syntax in table 7. The model provides meaningful predictions here, despite low overall accuracy. Most of the labels2 are self-explanatory: Patient (PAT), Actor (ACT), Time (TWHEN), Effect (EFF), Location (LOC), Manner (MANN), Addressee (ADDR), Extent (EXT). CPHR marks the 2http://ufal.mff.cuni.cz/∼toman/pcedt/en/functors.html Label Freq. F1 Re. Pr. PAT 14707 69.4 70.0 68.7 ACT 14303 81.1 81.7 80.4 TWHEN 3631 70.6 65.1 77.0 EFF 2601 45.4 67.2 34.3 LOC 1990 41.8 35.3 51.3 MANN 1208 54.0 63.8 46.9 ADDR 1045 30.2 34.4 26.8 CPHR 791 20.4 13.1 45.0 EXT 708 42.2 40.5 44.1 DIR3 695 20.1 17.3 23.9 Table 7: EN-CZ transfer (with original syntax), F1, recall and precision for the top-10 most frequent roles. nominal part of a complex predicate, as in “to have [a plan]CPHR”, and DIR3 indicates destination. 5.3 Additional Experiments We now evaluate the contribution of different aspects of the feature representation to the performance of the model. Table 8 contains the results for English-French. Features Orig Trans POS 47.5 47.5 POS, Synt 53.0 53.1 POS, Cls 53.7 53.7 POS, Gloss 63.7 63.7 POS, Synt, Cls 55.9 56.4 POS, Synt, Gloss 65.2 66.3 POS, Cls, Gloss 61.5 61.5 POS, Synt, Cls, Gloss 63.0 65.1 Table 8: EN-FR model transfer accuracy with different feature subsets, using original and transferred syntactic information. The fact that the model performs slightly better with transferred syntax may be explained by two factors. Firstly, as we already mentioned, the original syntactic annotation is also produced automatically. 
Secondly, in the model transfer setup it is more important how closely the syntacticsemantic interface on the target side resembles that on the source side than how well it matches the “true” structure of the target language, and in this respect a transferred dependency parser may have an advantage over one trained on target-language data. The high impact of the Gloss features here 1196 may be partly attributed to the fact that the mapping is derived from the same corpus as the evaluation data – Europarl (Koehn, 2005) – and partly by the similarity between English and French in terms of word order, usage of articles and prepositions. The moderate contribution of the crosslingual cluster features are likely due to the insufficient granularity of the clustering for this task. For more distant language pairs, the contributions of individual feature groups are less interpretable, so we only highlight a few observations. First of all, both EN-CZ and CZ-EN benefit noticeably from the use of the original syntactic annotation, including dependency relations, but not from the transferred syntax, most likely due to the low syntactic transfer performance. Both perform better when lexical information is available, although the improvement is not as significant as in the case of French – only up to 5%. The situation with Chinese is somewhat complicated in that adding lexical information here fails to yield an improvement in terms of the metric considered. This is likely due to the fact that we consider only the core roles, which can usually be predicted with high accuracy based on syntactic information alone. 6 Related Work Development of robust statistical models for core NLP tasks is a challenging problem, and adaptation of existing models to new languages presents a viable alternative to exhaustive annotation for each language. Although the models thus obtained are generally imperfect, they can be further refined for a particular language and domain using techniques such as active learning (Settles, 2010; Chen et al., 2011). Cross-lingual annotation projection (Yarowsky et al., 2001) approaches have been applied extensively to a variety of tasks, including POS tagging (Xi and Hwa, 2005; Das and Petrov, 2011), morphology segmentation (Snyder and Barzilay, 2008), verb classification (Merlo et al., 2002), mention detection (Zitouni and Florian, 2008), LFG parsing (Wr´oblewska and Frank, 2009), information extraction (Kim et al., 2010), SRL (Pad´o and Lapata, 2009; van der Plas et al., 2011; Annesi and Basili, 2010; Tonelli and Pianta, 2008), dependency parsing (Naseem et al., 2012; Ganchev et al., 2009; Smith and Eisner, 2009; Hwa et al., 2005) or temporal relation prediction (Spreyer and Frank, 2008). Interestingly, it has also been used to propagate morphosyntactic information between old and modern versions of the same language (Meyer, 2011). Cross-lingual model transfer methods (McDonald et al., 2011; Zeman and Resnik, 2008; Durrett et al., 2012; Søgaard, 2011; Lopez et al., 2008) have also been receiving much attention recently. The basic idea behind model transfer is similar to that of cross-lingual annotation projection, as we can see from the way parallel data is used in, for example, McDonald et al. (2011). A crucial component of direct transfer approaches is the unified feature representation. There are at least two such representations of lexical information (Klementiev et al., 2012; T¨ackstr¨om et al., 2012), but both work on word level. 
This makes it hard to account for phenomena that are expressed differently in the languages considered, for example the syntactic function of a certain word may be indicated by a preposition, inflection or word order, depending on the language. Accurate representation of such information would require an extra level of abstraction (Hajiˇc, 2002). A side-effect of using adaptation methods is that we are forced to use the same annotation scheme for the task in question (SRL, in our case), which in turn simplifies the development of cross-lingual tools for downstream tasks. Such representations are also likely to be useful in machine translation. Unsupervised semantic role labeling methods (Lang and Lapata, 2010; Lang and Lapata, 2011; Titov and Klementiev, 2012a; Lorenzo and Cerisara, 2012) also constitute an alternative to cross-lingual model transfer. For an overview of of semi-supervised approaches we refer the reader to Titov and Klementiev (2012b). 7 Conclusion We have considered the cross-lingual model transfer approach as applied to the task of semantic role labeling and observed that for closely related languages it performs comparably to annotation projection approaches. It allows one to quickly construct an SRL model for a new language without manual annotation or language-specific heuristics, provided an accurate model is available for one of the related languages along with a certain amount of parallel data for the two languages. While an1197 notation projection approaches require sentenceand word-aligned parallel data and crucially depend on the accuracy of the syntactic parsing and SRL on the source side of the parallel corpus, cross-lingual model transfer can be performed using only a bilingual dictionary. Unsupervised SRL approaches have their advantages, in particular when no annotated data is available for any of the related languages and there is a syntactic parser available for the target one, but the annotation they produce is not always sufficient. In applications such as Information Retrieval it is preferable to have precise labels, rather than just clusters of arguments, for example. Also note that when applying cross-lingual model transfer in practice, one can improve upon the performance of the simplistic model we use for evaluation, for example by picking the features manually, taking into account the properties of the target language. Domain adaptation techniques can also be employed to adjust the model to the target language. Acknowledgments The authors would like to thank Alexandre Klementiev and Ryan McDonald for useful suggestions and T¨ackstr¨om et al. (2012) for sharing the cross-lingual word representations. This research is supported by the MMCI Cluster of Excellence. References Omri Abend, Roi Reichart, and Ari Rappoport. 2009. Unsupervised argument identification for semantic role labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, ACL ’09, pages 28–36, Stroudsburg, PA, USA. Association for Computational Linguistics. Paolo Annesi and Roberto Basili. 2010. Cross-lingual alignment of FrameNet annotations through hidden Markov models. In Proceedings of the 11th international conference on Computational Linguistics and Intelligent Text Processing, CICLing’10, pages 12– 25, Berlin, Heidelberg. Springer-Verlag. Roberto Basili, Diego De Cao, Danilo Croce, Bonaventura Coppola, and Alessandro Moschitti. 2009. 
Cross-language frame semantics transfer in bilingual corpora. In Alexander F. Gelbukh, editor, Proceedings of the 10th International Conference on Computational Linguistics and Intelligent Text Processing, pages 332–345. Anders Bj¨orkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual semantic role labeling. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 43–48, Boulder, Colorado, June. Association for Computational Linguistics. Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint-driven learning. In ACL. Chenhua Chen, Alexis Palmer, and Caroline Sporleder. 2011. Enhancing active learning for semantic role labeling via compressed dependency trees. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 183–191, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. Proceedings of the Association for Computational Linguistics. Greg Durrett, Adam Pauls, and Dan Klein. 2012. Syntactic transfer using a bilingual lexicon. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1–11, Jeju Island, Korea, July. Association for Computational Linguistics. Andreas Eisele and Yu Chen. 2010. MultiUN: A multilingual corpus from United Nation documents. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10). European Language Resources Association (ELRA). Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of the 47th Annual Meeting of the ACL, pages 369–377, Stroudsburg, PA, USA. Association for Computational Linguistics. Qin Gao and Stephan Vogel. 2011. Corpus expansion for statistical machine translation with semantic role label substitution rules. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 294–298, Portland, Oregon, USA. Trond Grenager and Christopher D. Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In Proceedings of EMNLP. Jan Hajiˇc. 2002. Tectogrammatical representation: Towards a minimal transfer in machine translation. In Robert Frank, editor, Proceedings of the 6th International Workshop on Tree Adjoining Grammars 1198 and Related Frameworks (TAG+6), pages 216— 226, Venezia. Universita di Venezia. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 1–18, Boulder, Colorado. 
Jan Hajiˇc, Eva Hajiˇcov´a, Jarmila Panevov´a, Petr Sgall, Ondˇrej Bojar, Silvie Cinkov´a, Eva Fuˇc´ıkov´a, Marie Mikulov´a, Petr Pajas, Jan Popelka, Jiˇr´ı Semeck´y, Jana ˇSindlerov´a, Jan ˇStˇep´anek, Josef Toman, Zdeˇnka Ureˇsov´a, and Zdenˇek ˇZabokrtsk´y. 2012. Announcing Prague Czech-English dependency treebank 2.0. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uˇgur Doˇgan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey, May. European Language Resources Association (ELRA). Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel text. Natural Language Engineering, 11(3):311–325. Richard Johansson and Pierre Nugues. 2008. Dependency-based semantic role labeling of PropBank. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 69–78, Honolulu, Hawaii. Michael Kaisser and Bonnie Webber. 2007. Question answering based on semantic roles. In ACL Workshop on Deep Linguistic Processing. Seokhwan Kim, Minwoo Jeong, Jonghoon Lee, and Gary Geunbae Lee. 2010. A cross-lingual annotation projection approach for relation detection. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 564–571, Stroudsburg, PA, USA. Association for Computational Linguistics. Paul Kingsbury, Nianwen Xue, and Martha Palmer. 2004. Propbanking in parallel. In In Proceedings of the Workshop on the Amazing Utility of Parallel and Comparable Corpora, in conjunction with LREC’04. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of the International Conference on Computational Linguistics (COLING), Bombay, India. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand. AAMT. Joel Lang and Mirella Lapata. 2010. Unsupervised induction of semantic roles. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 939–947, Los Angeles, California, June. Association for Computational Linguistics. Joel Lang and Mirella Lapata. 2011. Unsupervised semantic role induction via split-merge clustering. In Proc. of Annual Meeting of the Association for Computational Linguistics (ACL). Ding Liu and Daniel Gildea. 2010. Semantic role features for machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), Beijing, China. Adam Lopez, Daniel Zeman, Michael Nossal, Philip Resnik, and Rebecca Hwa. 2008. Cross-language parser adaptation between related languages. In IJCNLP-08 Workshop on NLP for Less Privileged Languages, pages 35–42, Hyderabad, India, January. Alejandra Lorenzo and Christophe Cerisara. 2012. Unsupervised frame based semantic role induction: application to French and English. In Proceedings of the ACL 2012 Joint Workshop on Statistical Parsing and Semantic Processing of Morphologically Rich Languages, pages 30–35, Jeju, Republic of Korea, July. Association for Computational Linguistics. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 62–72, Stroudsburg, PA, USA. Association for Computational Linguistics. Paola Merlo, Suzanne Stevenson, Vivian Tsang, and Gianluca Allaria. 2002. A multi-lingual paradigm for automatic verb classification. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL’02), pages 207– 214, Philadelphia, PA. Roland Meyer. 2011. New wine in old wineskins?– Tagging old Russian via annotation projection from modern translations. Russian Linguistics, 35(2):267(15). Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 629–637, Jeju Island, Korea, July. Association for Computational Linguistics. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Comput. Linguist., 34(4):513–553, December. 1199 Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1). Sebastian Pad´o and Mirella Lapata. 2009. Crosslingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36:307– 340. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31:71–105. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of LREC, May. Mark Sammons, Vinod Vydiswaran, Tim Vieira, Nikhil Johri, Ming wei Chang, Dan Goldwasser, Vivek Srikumar, Gourab Kundu, Yuancheng Tu, Kevin Small, Joshua Rule, Quang Do, and Dan Roth. 2009. Relation alignment for textual entailment recognition. In Text Analysis Conference (TAC). Burr Settles. 2010. Active learning literature survey. Computer Sciences Technical Report, 1648. Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In EMNLP. David A Smith and Jason Eisner. 2009. Parser adaptation and projection with quasi-synchronous grammar features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 822–831. Association for Computational Linguistics. Benjamin Snyder and Regina Barzilay. 2008. Crosslingual propagation for morphological analysis. In Proceedings of the 23rd national conference on Artificial intelligence. Anders Søgaard. 2011. Data point selection for crosslanguage adaptation of dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, volume 2 of HLT ’11, pages 682–686, Stroudsburg, PA, USA. Association for Computational Linguistics. Kathrin Spreyer and Anette Frank. 2008. Projectionbased acquisition of a temporal labeller. Proceedings of IJCNLP 2008. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proc. of the Annual Meeting of the North American Association of Computational Linguistics (NAACL), pages 477– 487, Montr´eal, Canada. Cynthia A. Thompson, Roger Levy, and Christopher D. Manning. 2003. A generative model for semantic role labeling. In Proceedings of the 14th European Conference on Machine Learning, ECML 2003, pages 397–408, Dubrovnik, Croatia. Ivan Titov and Alexandre Klementiev. 2012a. A Bayesian approach to unsupervised semantic role induction. In Proc. 
of European Chapter of the Association for Computational Linguistics (EACL). Ivan Titov and Alexandre Klementiev. 2012b. Semisupervised semantic role labeling: Approaching from an unsupervised perspective. In Proceedings of the International Conference on Computational Linguistics (COLING), Bombay, India, December. Sara Tonelli and Emanuele Pianta. 2008. Frame information transfer from English to Italian. In Proceedings of LREC 2008. Lonneke van der Plas, James Henderson, and Paola Merlo. 2009. Domain adaptation with artificial data for semantic parsing of speech. In Proc. 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 125–128, Boulder, Colorado. Lonneke van der Plas, Paola Merlo, and James Henderson. 2011. Scaling up automatic cross-lingual semantic role annotation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, HLT ’11, pages 299–304, Stroudsburg, PA, USA. Association for Computational Linguistics. Alina Wr´oblewska and Anette Frank. 2009. Crosslingual projection of LFG F-structures: Building an F-structure bank for Polish. In Eighth International Workshop on Treebanks and Linguistic Theories, page 209. Dekai Wu and Pascale Fung. 2009. Can semantic role labeling improve SMT? In Proceedings of 13th Annual Conference of the European Association for Machine Translation (EAMT 2009), Barcelona. Chenhai Xi and Rebecca Hwa. 2005. A backoff model for bootstrapping resources for non-English languages. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 851–858, Stroudsburg, PA, USA. David Yarowsky, Grace Ngai, and Ricahrd Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of Human Language Technology Conference. Daniel Zeman and Philip Resnik. 2008. Crosslanguage parser adaptation between related languages. In Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages, pages 35– 42, Hyderabad, India, January. Asian Federation of Natural Language Processing. Imed Zitouni and Radu Florian. 2008. Mention detection crossing the language barrier. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. 1200
2013
117
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1201–1211, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics DERIVBASE: Inducing and Evaluating a Derivational Morphology Resource for German Britta Zeller∗ Jan Šnajder† Sebastian Padó∗ ∗Heidelberg University, Institut für Computerlinguistik 69120 Heidelberg, Germany †University of Zagreb, Faculty of Electrical Engineering and Computing Unska 3, 10000 Zagreb, Croatia {zeller, pado}@cl.uni-heidelberg.de [email protected] Abstract Derivational models are still an underresearched area in computational morphology. Even for German, a rather resourcerich language, there is a lack of largecoverage derivational knowledge. This paper describes a rule-based framework for inducing derivational families (i.e., clusters of lemmas in derivational relationships) and its application to create a highcoverage German resource, DERIVBASE, mapping over 280k lemmas into more than 17k non-singleton clusters. We focus on the rule component and a qualitative and quantitative evaluation. Our approach achieves up to 93% precision and 71% recall. We attribute the high precision to the fact that our rules are based on information from grammar books. 1 Introduction Morphological processing is generally recognized as an important step for many NLP tasks. Morphological analyzers such as lemmatizers and part of speech (POS) taggers are commonly the first NLP tools developed for any language (Koskenniemi, 1983; Brill, 1992). They are also applied in NLP applications where little other linguistic analysis is performed, such as linguistic annotation of corpora or terminology acquisition; see Daille et al. (2002) for an informative summary. Most work on computational morphology has focused on inflectional morphology, that is, the handling of grammatically determined variation of form (Bickel and Nichols, 2001), which can be understood, overimplifying somewhat, as a normalization step. Derivational morphology, which is concerned with the formation of new words from existing ones, has received less attention. Examples are nominalization (to understand →the understanding), verbalization (the shelf →to shelve), and adjectivization (the size →sizable). Part of the reason for the relative lack of attention lies in the morphological properties of English, such as the presence of many zero derivations (the fish → to fish), the dominance of suffixation, and the relative absence of stem changes in derivation. For these reasons, simple stemming algorithms (Porter, 1980) provide a cheap and accurate approximation to English derivation. Two major NLP resources deal with derivation. WordNet lists so-called “morphosemantic” relations (Fellbaum et al., 2009) for English, and a number of proposals exist for extending WordNets in other languages with derivational relations (Bilgin et al., 2004; Pala and Hlaváˇcková, 2007). CatVar, the “Categorial Variation Database of English” (Habash and Dorr, 2003), is a lexicon aimed specifically at derivation. It groups English nouns, verbs, adjectives, and adverbs into derivational equivalence classes or derivational families such as askV askerN askingN askingA Derivational families are commonly understood as groups of derivationally related lemmas (Daille et al., 2002; Milin et al., 2009). The lemmas in CatVar come from various open word classes, and multiple words may be listed for the same POS. 
The above family lists two nouns: an event noun (asking) and an agentive noun (asker). However, CatVar does not consider prefixation, which is why, e.g., the adjective unasked is missing. CatVar has found application in different areas of English NLP. Examples are the acquisition of paraphrases that cut across POS lines, applied, for example, in textual entailment (Szpektor and Dagan, 2008; Berant et al., 2012). Then there is the induction and extension of semantic roles resources for predicates of various parts of speech (Meyers et al., 2004; Green et al., 2004). Finally, CatVar has 1201 been used as a lexical resource to generate sentence intersections (Thadani and McKeown, 2011). In this paper, we describe the project of obtaining derivational knowledge for German to enable similar applications. Even though there are two derivational resources for this language, IMSLEX (Fitschen, 2004) and CELEX (Baayen et al., 1996), both have shortcomings. The former does not appear to be publicly available, and the latter has a limited coverage (50k lemmas) and does not explicitly represent derivational relationships within families, which are necessary for fine-grained optimization of families. For this reason, we look into building a novel derivational resource for German. Unfortuantely, the approach used to build CatVar cannot be adopted: it builds on a collection of high-quality lexical-semantic resources such as NOMLEX (Macleod et al., 1998), which are not available for German. Instead, we employ a rule-based framework to define derivation rules that cover both suffixation and prefixation and describes stem changes. Following the work of Šnajder and Dalbelo Baši´c (2010), we define the derivational processes using derivational rules and higher-order string transformation functions. The derivational rules induce a partition of the language’s lemmas into derivational families. Our method is applicable to many languages if the following are available: (1) a comprehensive set of lemmas (optionally including gender information); (2) knowledge about admissible derivational patterns, which can be gathered, for example, from linguistics textbooks. The result is a freely available high-precision high-coverage resource for German derivational morphology that has a structure parallel to CatVar, but was obtained without using manually constructed lexical-semantic resources. We conduct a thorough evaluation of the induced derivational families both regarding precision and recall. Plan of the paper. Section 2 discusses prior work. Section 3 defines our derivation model that is applied to German in Section 4. Sections 5 and 6 present our evaluation setup and results. Section 7 concludes the paper and outlines future work. 2 Related Work Computational models of morphology have a long tradition. Koskenniemi (1983) was the first who analyzed and generated morphological phenomena computationally. His two-level theory has been applied in finite state transducers (FST) for several languages (Karttunen and Beesley, 2005). Many recent approaches automatically induce morphological information from corpora. They are either based solely on corpus statistics (Déjean, 1998), measure semantic similarity between input and output lemma (Schone and Jurafsky, 2000), or bootstrap derivation rules starting from seed examples (Piasecki et al., 2012). Hammarström and Borin (2011) give an extensive overview of stateof-the-art unsupervised learning of morphology. 
Unsupervised approaches operate at the level of word-forms and have complementary strengths and weaknesses to rule-based approaches. On the upside, they do not require linguistic knowledge; on the downside, they have a harder time distinguishing between derivation and inflection, which may result in lower precision, and are not guaranteed to yield analyses that correspond to linguistic intuition. An exception is the work by Gaussier (1999), who applies an unsupervised model to construct derivational families for French. For German, several morphological tools exist. Morphix is a classification-based analyzer and generator of German words on the inflectional level (Finkler and Neumann, 1988). SMOR (Schmid et al., 2004) employs a finite-state transducer to analyze German words at the inflectional, derivational, and compositional level, and has been used in other morphological analyzers, e.g., Morphisto (Zielinski and Simon, 2008). The site canoonet1 offers broad-coverage information about the German language including derivational word formation. 3 Framework In this section, we describe our rule-based model of derivation, its operation to define derivational families, and the application of the model to German. We note that the model is purely surface-based, i.e., it does not model any semantic regularities beyond those implicit in string transformations. We begin by outlining the characteristics of German derivational morphology. 3.1 German Derivational Morphology As German is a morphologically complex language, we analyzed its derivation processes before implementing our rule-based model. We relied on traditional grammar books and lexicons, e.g., Hoeppner (1980) and Augst (1975), in order to linguistically 1http://canoo.net 1202 justify our assumptions as well as to achieve the best possible precision and coverage. We concentrate on German derivational processes that involve nouns, verbs, and adjectives.2 Nouns are simple to recognize due to capitalization: stauenV – StauN (to jam – jam), essenV – EssenN (to eat – food). Verbs bear three typical suffixes (-en, -eln, -ern). An example of a derived verb is festA – festigenV (tight – to tighten), where -ig is the derivational suffix. Adjectivization works similarlty: TagN – täglichA (day – daily). This example shows that derivation can also involve stem changes in the form of umlaut (e.g., a →ä) and ablaut shift, e.g., siedenV – SudN (to boil – infusion). Other frequent processes in German derivation are circumfixation (HaftN – inhaftierenV (arrest – to arrest)) and prefixation (hebenV – behebenV (to raise – to remedy)). Prefixation often indicates a semantic shift, either in terms of the general meaning (as above) or in terms of the polarity ( klarA – unklarA (clear – unclear)). Also note that affixes can be either Germanic, e.g., ölen – Ölung (to oil – oiling), or Latin/Greek, e.g., generieren – Generator (to generate – generator). As this analysis shows, derivation in German involves transformation as well as affixation processes, which has to be taken into account when modeling a derivational resource. 3.2 A Rule-based Derivation Model The purpose of a derivational model is to define a set of transformations that correspond to valid derivational word formation rules. Rule-based frameworks offer convenient representations for derivational morphology because they can take advantage of linguistic knowledge about derivation, have interpretable representations, and can be finetuned for high precision. 
The choice of the framework is in principle arbitrary, as long as it can conveniently express the derivational phenomena of a language. Typically used for this purpose are two-level formalism rules (Karttunen and Beesley, 1992) or XFST replace rules (Beesley and Karttunen, 2003). In this paper, we adopt the modeling framework proposed by Šnajder and Dalbelo Bašić (2010). The framework corresponds closely to simple, human-readable descriptions in traditional grammar books. The expressiveness of the formalism is equivalent to the replacement rules commonly used in finite-state frameworks, thus the rules can be compiled into FSTs for efficient processing. The framework makes a clear distinction between inflectional and derivational morphology and provides separate modeling components for these two; we only make use of the derivation modeling component. We use an implementation of the modeling framework in Haskell. For details, see the studies by Šnajder and Dalbelo Bašić (2008) and Šnajder and Dalbelo Bašić (2010).

The building blocks of the derivational component are derivational rules (patterns) and transformation functions. A derivational rule describes the derivation of a derived word from a basis word. A derivational rule d is defined as a triple

d = (t, P1, P2)    (1)

where t is the transformation function that maps the basis word's stem (or lemma) into the derived word's stem (or lemma), while P1 and P2 are the sets of inflectional paradigms of the basis word and the derived word, respectively, which specify the morphological properties of the rule's input and output. For German, our study assumes that inflectional paradigms are combinations of part-of-speech and gender information (for nouns). A transformation function t : S → ℘(S) maps strings to a set of strings, representing possible transformations. At the lowest level, t is defined in terms of atomic string replacement operations (replacement of prefixes, suffixes, and infixes). The framework then uses the notion of higher-order functions – functions that take other transformations as arguments and return new transformations as results – to succinctly define common derivational processes such as prefixation, suffixation, and stem change. More complex word-formation rules, such as those combining prefixation and suffixation, can be obtained straightforwardly by functional composition. Table 1 summarizes the syntax we use for transformation functions and shows two example derivational rules. Rule 1 defines an English adjectivization rule. It uses the conditional try operator to apply to nouns with and without the -ion suffix (action – active, instinct – instinctive). Infix replacement is used to model stem alternation, as shown in rule 2 for German nominalization, e.g., vermachtA – VermächtnisN (bequeathed – bequest).
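To make the rule formalism concrete, the following minimal Python sketch (ours, not the authors' Haskell implementation) models transformation functions as set-valued string functions and implements the try and composition operators; the function names mirror Table 1, but the encoding of a rule as a plain tuple is our own simplification.

```python
# Sketch: transformation functions t : S -> P(S) as Python closures.

def sfx(s):
    """Concatenate the suffix s."""
    def t(w):
        return {w + s}
    return t

def dsfx(s):
    """Delete the suffix s (empty result if w does not end in s)."""
    def t(w):
        return {w[:-len(s)]} if w.endswith(s) else set()
    return t

def try_(t):
    """Apply t if possible, otherwise keep the word unchanged (the 'try' operator)."""
    def t2(w):
        out = t(w)
        return out if out else {w}
    return t2

def compose(t1, t2):
    """Functional composition t1 . t2: apply t2 first, then t1 to every candidate."""
    def t(w):
        return {v for u in t2(w) for v in t1(u)}
    return t

# Rule 1 from Table 1: derive -ive adjectives from nouns potentially ending in -ion,
# encoded as d = (t, P1, P2) with illustrative paradigm labels.
rule1 = (compose(sfx("ive"), try_(dsfx("ion"))), {"N"}, {"A"})

t, p_in, p_out = rule1
print(t("action"))    # {'active'}
print(t("instinct"))  # {'instinctive'}
```

Because of try, a single rule covers nouns both with and without the -ion suffix, which is exactly the behavior described for Rule 1 above.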
Table 1: Transformation functions and exemplary derivational rules in the framework by Šnajder and Dalbelo Bašić (2010). N and A denote the paradigms for nouns (without gender restriction) and adjectives, respectively.

Function        Description
sfx(s)          concatenate the suffix s
dsfx(s)         delete the suffix s
aifx(s1, s2)    alternate the infix s1 to s2
try(t)          perform transformation t, if possible
opt(t)          optionally perform transformation t
uml             alternate infixes for an umlaut shift: uml = aifx({(a, ä), (o, ö), (u, ü)})

Examples
1 (EN)  ⟨sfx(ive) ◦ try(dsfx(ion)), N, A⟩   "derive -ive adjectives from nouns potentially ending in -ion"
2 (DE)  ⟨sfx(nis) ◦ try(uml), A, N⟩         "derive -nis nouns from adjectives with optional umlaut creation"

3.3 Induction of Derivational Families

Recall that our goal is to induce derivational families, that is, classes of derivationally related words. We define derivational families on the basis of derivational rules as follows. Given a lemma-paradigm pair (l, p) as input, a single derivational rule d = (t, P1, P2) generates a set of possible derivations Ld(l, p) = {(l1, p1), . . . , (ln, pn)}, where p ∈ P1 and pi ∈ P2 for all i. Given a set of derivational rules D, we define a binary derivation relation →D between two lemma-paradigm pairs that holds if the second pair can be derived from the first one:

(l1, p1) →D (l2, p2)   iff   ∃d ∈ D. (l2, p2) ∈ Ld(l1, p1)    (2)

Let L denote the set of lemma-paradigm pairs. The set of derivational families defined by D on L is given by the equivalence classes of the transitive, symmetric, and reflexive closure of →D over L. Note that, in addition to the quality of the rules, the properties of L play a central role in the quality of the induced families. High coverage of L is important because the transitivity of →D ranges only over lemmas in L, so low coverage of L may result in fragmented derivational families. However, L should also not contain erroneous lemma-paradigm pairs. The reason is that the derivational rules only define admissible derivations, which need not be morphologically valid, and therefore routinely overgenerate; L plays an important role in filtering out derivations that are not attested in the data.

4 Building the Resource

4.1 Derivational Rules

We implemented the derivational rules from Hoeppner (1980) for verbs, nouns, and adjectives, covering all processes described in Section 3.1 (zero derivation, prefixation, suffixation, circumfixation, and stem changes). We found many derivational patterns in German to be conceptually simple (e.g., verb-noun zero derivation), so that substantial coverage can already be achieved with very simple transformation functions. However, there are many more complex patterns (e.g., suffixation combined with optional stem changes) that in sum also affect a considerable number of lemmas, which required us to either implement low-coverage rules or generalize existing rules. In order to preserve precision as much as possible, we restricted rule application by using try instead of opt, and by using gender information from the noun paradigms (for example, some rules only apply to masculine nouns and produce feminine nouns). As a result, we end up with high-coverage rules, such as derivations of person-denoting nouns (SchuleN – SchülerN (school – pupil)), as well as high-accuracy rules such as negation prefixes (PolN – GegenpolN (pole – antipole)).
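The family induction of Section 3.3 — generate admissible derivations with the rules, keep only those attested in L, and take the equivalence classes of the closure of →D — can be sketched with a union-find structure as follows; this is an illustrative reimplementation under our own data representation, not the actual resource-building code.

```python
# Sketch: derivational families as equivalence classes of the reflexive,
# symmetric, transitive closure of ->D over the lemma-paradigm set L (Eq. 2),
# computed with union-find. Each rule is a triple (t, P1, P2) as in Eq. (1),
# where t maps a lemma to a set of candidate derived lemmas.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def induce_families(L, rules):
    L = set(L)                               # attested (lemma, paradigm) pairs
    parent = {x: x for x in L}
    for lemma, paradigm in L:
        for t, P1, P2 in rules:
            if paradigm not in P1:
                continue
            for derived in t(lemma):
                for p2 in P2:
                    candidate = (derived, p2)
                    if candidate in L:       # L filters out unattested derivations
                        root_a = find(parent, (lemma, paradigm))
                        root_b = find(parent, candidate)
                        parent[root_a] = root_b
    families = {}
    for x in L:
        families.setdefault(find(parent, x), set()).add(x)
    return list(families.values())
```

The check against L is what keeps the routinely overgenerating rules in check: a derivation only links two lemmas if both are attested in the corpus-derived lemma list.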
Even though we did not focus on the explanatory relevance of rules, we found that the underlying modeling formalism, and the methodology used to develop the model, offer substantial linguistic plausibility in practice. We had to resort to heuristics mostly for words with derivational transformations that are motivated by Latin or Greek morphology and do not occur regularly in German, e.g., selegierenV – SelektionN (select – selection). In the initial development phase, we implemented 154 rules, which took about 22 personhours. We then revised the rules with the aim of increasing both precision and recall. To this end, we constructed a development set comprised of a sample of 1,000 derivational families induced using our rules. On this set, we inspected the derivational families for false positives, identified the problematic rules, and identified unused and redundant rules. In order to identify the false negatives, we additionally sampled a list of 1,000 lemmas and used string distance measures (cf. Section 5.1) to retrieve the 10 most similar words for each lemma not 1204 Process N-N N-A N-V A-A A-V V-V Zero derivation – 1 5 – – – Prefixation 10 – 5 5 2 9 + Stem change – – 3 – 1 – Suffixation 15 35 20 1 14 – + Stem change 2 8 7 – 3 1 Circumfixation – – 1 – – – + Stem change – – 1 – – – Stem change – – 7 – – 2 Total 27 44 49 6 20 12 Table 2: Breakdown of derivation rules by category of the basis and the derived word already covered by the derivational families. The refinement process took another 8 person-hours. It revealed three redundant rules and seven missing rules, leading us to a total of 158 rules. Table 2 shows the distribution of rules with respect to the derivational processes they implement and the part of speech combinations for the basis and the derived words. All affixations occur both with and without stem changes, mostly umlaut shifts. Suffixation is by far the most frequently used derivation process, and noun-verb derivation is most diverse in terms of derivational processes. We also estimated the reliability of derivational rules by analyzing the accuracy of each rule on the development set. We assigned each rule a confidence rating on a three-level scale: L3 – very reliable (high-accuracy rules), L2 – generally reliable, and L1 – less reliable (low-accuracy rules). We manually analyzed the correctness of rule applications for 100 derivational families of different size (counting 2 up to 114 lemmas), and assigned 55, 79, and 24 rules to L3, L2 and L1, respectively. 4.2 Data and Preprocessing For an accurate application of nominal derivation rules, we need a lemma list with POS and gender information. We POS-tag and lemmatize SDEWAC, a large German-language web corpus from which boilerplate paragraphs, ungrammatical sentences, and duplicate pages were removed (Faaß et al., 2010). For POS tagging and lemmatization, we use TreeTagger (Schmid, 1994) and determine grammatical gender with the morphological layer of the MATE Tools (Bohnet, 2010). We treat proper nouns like common nouns. We apply three language-specific filtering steps based on observations in Section 3.1. First, we discard non-capitalized nominal lemmas. Second, we deleted verbal lemmas not ending in verb suffixes. Third, we removed frequently occurring erroneous comparative forms of adjectives (usually formed by adding -er, like neuer / newer) by checking for the presence of lemmas without -er (neu / new). 
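The three language-specific filtering steps just described can be captured in a few lines; the following sketch assumes a simple (lemma, POS) list and is our own simplification — the actual preprocessing additionally relies on TreeTagger, the MATE tools, and gender information as stated above.

```python
# Sketch of the three filters from Section 4.2: capitalization for nouns,
# verb suffixes for verbs, and removal of spurious comparative adjectives.

VERB_SUFFIXES = ("en", "eln", "ern")

def filter_lemmas(lemmas):
    """lemmas: iterable of (lemma, pos) pairs with pos in {'N', 'V', 'A'}."""
    kept, all_lemmas = [], {lemma for lemma, _ in lemmas}
    for lemma, pos in lemmas:
        if pos == "N" and not lemma[:1].isupper():
            continue                      # 1) discard non-capitalized nominal lemmas
        if pos == "V" and not lemma.endswith(VERB_SUFFIXES):
            continue                      # 2) discard verbal lemmas without a verb suffix
        if pos == "A" and lemma.endswith("er") and lemma[:-2] in all_lemmas:
            continue                      # 3) drop erroneous comparatives (neuer -> neu)
        kept.append((lemma, pos))
    return kept

print(filter_lemmas([("Stau", "N"), ("stauen", "V"), ("neu", "A"), ("neuer", "A")]))
# [('Stau', 'N'), ('stauen', 'V'), ('neu', 'A')]
```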
An additional complication in German concerns prefix verbs, because the prefix is separated in tensed instances. For example, the third person singular of aufhören (to stop) is er hört auf (he stops). Since most prefixes double as prepositions, the correct lemmas can only be reconstructed by parsing. We parse the corpus using the MST parser (McDonald et al., 2006) and recover prefix verbs by searching for instances of the dependency relation labeled PTKVZ. Since SDEWAC, as a web corpus, still contains errors, we only take into account lemmas that occur three times or more in the corpus. Considering the size of SDEWAC, we consider this a conservative filtering step that preserves high recall and provides a comprehensive basis for evaluation. After preprocessing and filtering, we run the induction of the derivational families as explained in Section 3 to obtain the DERIVBASE resource.

4.3 Statistics on DERIVBASE

The preparation of the SDEWAC corpus as explained in Section 4.2 yields 280,336 lemmas, all of which are covered by our resource. We induced a total of 239,680 derivational families from this data, with 17,799 non-singletons and 221,881 singletons (most of them due to compound nouns). 11,039 of the families consist of two lemmas, while the biggest contains 116 lemmas (an overgenerated family). The biggest family with perfect precision (i.e., it contains only morphologically related lemmas) contains 40 lemmas, e.g., haltenV, erhaltenV, VerhältnisN (to hold, to uphold, relation), etc. For comparison, CatVar v2.1 contains only 82,676 lemmas in 13,368 non-singleton clusters and 38,604 singletons. The following sample family has seven members across all three POSes and includes prefixation, suffixation, and infix umlaut shifts: taubA (numbA), TaubheitNf (numbnessN), betäubenV (to anesthetizeV), BetäubungNf (anesthesiaN), betäubtA (anesthetizedA), betäubendA (anestheticA), BetäubenNn (act of anesthetizingN).

5 Evaluation

5.1 Baselines

We use two baselines against which we compare the induced derivational families: (1) clusters obtained with the German version of Porter's stemmer (Porter, 1980; http://snowball.tartarus.org) and (2) clusters obtained using string distance-based clustering. We have considered a number of string distance measures and tested them on the development set (cf. Section 4.1). The measure proposed by Majumder et al. (2007) turned out to be the most effective in capturing suffixal variation. For words X and Y, it is defined as

D4(X, Y) = (n − m + 1)/(n + 1) · Σ_{i=m}^{n} 1/2^{i−m}    (3)

where m is the position of the left-most character mismatch, and n + 1 is the length of the longer of the two strings. To capture prefixal variation and stem changes, we use the n-gram based measure proposed by Adamson and Boreham (1974):

Dice_n(X, Y) = 1 − 2c/(x + y)    (4)

where x and y are the total numbers of distinct n-grams in X and Y, respectively, and c is the number of distinct n-grams shared by both words. In our experiments, the best performance was achieved with n = 3. We used hierarchical agglomerative clustering with average linkage. To reduce the computational complexity, we performed a preclustering step by recursively partitioning the set of lemmas sharing the same prefix into partitions of manageable size (1,000 lemmas). Initially, we set the number of clusters to be roughly equal to the number of induced derivational families. For the final evaluation, we optimized the number of clusters based on the F1 score on calibration and validation sets (cf. Section 5.3).
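For reference, Equations (3) and (4) translate directly into code; the sketch below reflects our reading of the formulas (0-indexed mismatch position, distance 0 for identical strings) and is not the original YASS implementation.

```python
# Sketch of the two string distance measures from Section 5.1 (Eqs. 3 and 4).

def d4(x, y):
    """Majumder et al. (2007): penalizes early suffixal mismatches."""
    n = max(len(x), len(y)) - 1     # n + 1 = length of the longer string
    m = next((i for i in range(n + 1)
              if i >= len(x) or i >= len(y) or x[i] != y[i]), n + 1)
    if m > n:                       # identical strings
        return 0.0
    return (n - m + 1) / (n + 1) * sum(1 / 2 ** (i - m) for i in range(m, n + 1))

def dice_distance(x, y, n=3):
    """Adamson and Boreham (1974): 1 minus the Dice coefficient over distinct n-grams."""
    gx = {x[i:i + n] for i in range(len(x) - n + 1)}
    gy = {y[i:i + n] for i in range(len(y) - n + 1)}
    if not gx and not gy:
        return 0.0
    return 1 - 2 * len(gx & gy) / (len(gx) + len(gy))

print(round(d4("täglich", "tagen"), 4))
print(round(dice_distance("Speiche", "Speicher"), 4))
```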
5.2 Evaluation Methodology

The induction of derivational families could be evaluated globally as a clustering problem. Unfortunately, cluster evaluation is a non-trivial task for which there is no consensus on the best approach (Amigó et al., 2009). We decided to perform our evaluation at the level of pairs: we manually judge for a set of pairs whether they are derivationally related or not. We obtain the gold standard for this evaluation by sampling lemmas from the lemma list. With random sampling, the evaluation would be unrealistic because a vast majority of pairs would be derivationally unrelated and count as true negatives in our analysis. Moreover, in order to reliably estimate the overall precision of the obtained derivational families, we need to evaluate on pairs sampled from these families. On the other hand, in order to assess recall, we need to sample from pairs that are not included in our derivational families. To obtain reliable estimates of both precision and recall, we decided to draw two different samples: (1) a sample of lemma pairs drawn from the induced derivational families, on which we estimate precision (P-sample), and (2) a sample of lemma pairs drawn from the set of possibly derivationally related lemma pairs, on which we estimate recall (R-sample). In both cases, pairs (l1, l2) are sampled in two steps: first a lemma l1 is drawn from a non-singleton family, then the second lemma l2 is drawn from the derivational family of l1 (P-sample) or the set of lemmas possibly related to l1 (R-sample). The set of possibly related lemmas is a union of the derivational family of l1, the clusters of l1 obtained with the baseline methods, and the k lemmas most similar to l1 according to the two string distance measures. We use k = 7 in our experiments. This is based on preliminary experiments on the development set (cf. Section 4.1), which showed that k = 7 retrieves about 92% of the related lemmas retrieved for k = 20, with a much smaller number of true negatives. Thus, the evaluation on the R-sample might overestimate the recall, but only slightly so, while the P-sample yields a reliable estimate of precision by reducing the number of true negatives in the sample. Each of the two samples contains 2,400 lemma pairs. Lemmas included in the development set (Section 4.1) were excluded from sampling.

5.3 Gold Standard Annotation

Two German native speakers annotated the pairs from the P- and R-samples. We defined five categories into which all lemma pairs are classified, as shown in Table 3. We count R and M as positives and N, C, L as negatives (cf. Section 3); ambiguous lemmas are categorized as positive (R or M) if there is a matching sense. Note that this binary distinction would be sufficient to compute recall and precision. However, the more
fine-grained five-class annotation scheme provides a more detailed picture. The separation between R and M gives a deeper insight into the semantics of the derivational families. Distinguishing between C and N, in turn, allows us to identify the pairs that are derivationally unrelated, but compositionally related, e.g., EhemannN – EhefrauN (husband – wife).

Table 3: Categories for lemma pair classification

Label  Description                                                          Example
R      l1 and l2 are morphologically and semantically related               kratzigA – verkratztA (scratchy – scuffed)
M      l1 and l2 are morphologically but not semantically related           bombenV – bombigA (to bomb – smashing)
N      no morphological relation                                            belebtA – lobenV (lively – to praise)
C      no derivational relation, but the pair is compositionally related    FilmendeN – filmenV (end of film – to film)
L      not a valid lemma (mislemmatization, wrong gender, foreign words)    HaufeN – HäufungN (N/A – accumulation)

We first carried out a calibration phase in which the annotators double-annotated 200 pairs from each of the two samples and refined the annotation guidelines. In a subsequent validation phase, we computed inter-annotator agreement on the annotations of another 200 pairs each from the P- and the R-samples. Table 4 shows the proportion of identical annotations by both annotators as well as Cohen's κ score (Cohen, 1968). We achieve substantial agreement for κ (Carletta, 1996). On the P-sample, κ is a little lower because the distribution of the categories is skewed towards R, which makes agreement by chance more probable.

Table 4: Inter-annotator agreement on the validation sample

           Agreement   Cohen's κ
R-sample   0.85        0.79
P-sample   0.86        0.70

In our opinion, the IAA results were sufficiently high to switch to single annotation for the production phase. Here, each annotator annotated another 1,000 pairs from the P-sample and R-sample, so that the final test set consists of 2,000 pairs from each sample. The P-sample contains 1,663 positive (R+M) and 337 negative (N+C+L) pairs, while the R-sample contains 575 positive and 1,425 negative pairs. As expected, there are more positive pairs in the P-sample and more negative pairs in the R-sample.

6 Results

6.1 Quantitative Evaluation

Table 5 presents the overall results. We evaluate four variants of the induced derivational families: those obtained before rule refinement (DERIVBASE initial), and three variants after rule refinement: using all rules (DERIVBASE-L123), excluding the least reliable rules (DERIVBASE-L23), and using only highly reliable rules (DERIVBASE-L3). We measure the precision of our method on the P-sample and recall on the R-sample. For the baselines, precision was also computed on the R-sample (computing it on the P-sample, which is obtained from the induced derivational families, would severely underestimate the number of false positives). We omit the F1 score because its use for precision and recall estimates from different samples is unclear.

Table 5: Precision and recall on the test samples

Method                  Precision    Recall
                        (P-sample)   (R-sample)
DERIVBASE (initial)     0.83         0.58
DERIVBASE-L123          0.83         0.71
DERIVBASE-L23           0.88         0.61
DERIVBASE-L3            0.93         0.35
                        (R-sample)
Stemming                0.66         0.07
String distance D4      0.36         0.20
String distance Dice3   0.23         0.23

DERIVBASE reaches 83% precision when using all rules and 93% precision when using only highly reliable rules. DERIVBASE-L123 achieves the highest recall, outperforming the other methods and variants by a large margin.
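Since precision and recall are estimated on different samples, their computation reduces to simple counts over annotated pairs; the sketch below encodes the convention of Section 5.3 (R and M positive; N, C, L negative) with a data layout of our own choosing.

```python
# Sketch: precision on the P-sample and recall on the R-sample.
# Each annotated pair is (in_same_family, gold_label).

POSITIVE = {"R", "M"}

def precision(p_sample):
    """P-sample pairs are drawn from the induced families, so in_same_family
    is True for all of them; precision is the fraction with a positive label."""
    tp = sum(1 for related, label in p_sample if related and label in POSITIVE)
    fp = sum(1 for related, label in p_sample if related and label not in POSITIVE)
    return tp / (tp + fp)

def recall(r_sample):
    """R-sample pairs come from the set of possibly related pairs; recall is the
    fraction of gold-positive pairs that the resource actually groups together."""
    tp = sum(1 for related, label in r_sample if related and label in POSITIVE)
    fn = sum(1 for related, label in r_sample if not related and label in POSITIVE)
    return tp / (tp + fn)
```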
Refinement of the initial model has produced a significant improvement in recall without losses in precision. The baselines perform worse than our method: the stemmer we use is rather conservative, which fragments the families and leads to a very low recall. The string distance-based approaches achieve more balanced precision and recall scores. Note that for these methods, precision and recall can be traded off against each other by varying the number of clusters; we chose the number of clusters by optimizing the F1 score on the calibration and validation sets. All subsequent analyses refer to DERIVBASE-L123, which is the model with the highest recall. If optimal precision is required, DERIVBASE-L3 should, however, be preferred.

Analysis by frequency. We cross-classified our rules according to high/low accuracy and high/low coverage based on the pairs in the P-sample. We only considered directly derivationally related (→D) pairs and defined "high accuracy" and "high coverage" as all rules above the 25th percentile in terms of accuracy and coverage, respectively. The results are shown in Table 6: all high-coverage rules are also highly accurate. Most rules are accurate but infrequent. Only 21 rules have low accuracy, but all of them apply infrequently.

Table 6: Proportions of accuracy and coverage for direct derivations (measured on the P-sample)

                Accuracy
Coverage   High   Low   Total
High         18     –      18
Low          53    21      74
Total        71    21      92

Analysis by parts of speech. Table 7 shows precision and recall values for different part-of-speech combinations of the basis and derived words. High precision and recall are achieved for N-A derivations. The recall is lowest for V-V derivations, suggesting that the derivational phenomena for this POS combination are not yet covered satisfactorily.

Table 7: Precision and recall across different parts of speech (first POS: basis; second POS: derived word)

        P     R            P     R
N-N   0.78  0.68    N-A  0.89  0.83
A-A   0.87  0.70    N-V  0.79  0.68
V-V   0.55  0.24    A-V  0.88  0.73

6.2 Error analysis

Table 8 shows the frequencies of true positives and false positives on the P-sample and of false negatives on the R-sample for each annotated category. True negatives are not reported, since their analysis gives no deeper insight.

Table 8: Predictions over annotated categories

         TPs        FPs        FNs
Label    P-sample   P-sample   R-sample
R        1,492      –          107
M        171        –          60
N        –          216        –
C        –          7          –
L        –          114        –
Total    1,663      337        167

True positives. In our analysis we treated both R and M pairs as related, but it is interesting to see how many of the true positives are in fact semantically unrelated. Out of 1,663 pairs, 90% are semantically as well as morphologically related (R), e.g., alkoholisierenV – antialkoholischA (to alcoholize – nonalcoholic), BeschuldigungN – unschuldigA (accusation – innocent). Most R pairs result from high-accuracy rules, i.e., zero derivation, negation prefixation, and simple suffixation. The remaining 10% are only morphologically related (M), e.g., beschwingtA – schwingenV (cheerful – to swing), StolzierenN – stolzA (strut – proud). In both pairs, the two lemmas share a common semantic concept – i.e., being in motion or being proud – but their present-day meanings have grown apart from each other. Among the M true positives, we observe prefixation derivations in 66% of the cases, often involving prefixation of both lemmas, e.g., ErdenklicheN – bedenklichA (imaginable – questionable).

False positives.
We observe many errors in pairs involving short lemmas, e.g., GenN – genierenV (gene – to be embarrassed), where the orthographic context is insufficient to reject the derivation. About 64% of the 337 incorrect pairs are of class N (unrelated lemmas). For example, the rule for deriving nouns denoting a male person incorrectly links MorseN – MörserN (Morse – mortar). Transitively applied rules often produce incorrect pairs; e.g., SpeicheN – speicherbarA (spoke – storable) results from the rule chain SpeicheN → SpeicherN → speichernV → speicherbarA (spoke → storage → to store → storable). Chains that involve ablaut shifts (cf. Section 3.1) can lead to surprising results, e.g., ErringungN – rangiertA (achievement – shunted). Meanwhile, some pairs judged as unrelated by the annotators might conceivably be weakly related, such as schlürfenV and schlurfenV (to sip – to shuffle), both of which refer to specific long-drawn-out sounds. About 20% of these unrelated lemma pairs are due to derivations between proper nouns (PNs) and common nouns. This happens especially for short PNs (cf. the above example of Morse). However, since PNs also participate in valid derivations (e.g., Chaplin – chaplinesque), one could investigate their impact on derivations rather than omitting them. Errors of the category L – 34% of the false positives – are caused during preprocessing by the lemmatizer. They cannot be blamed on our derivational model, but of course form part of the output.

False negatives. Errors of this type are due to missing derivation rules, erroneous rules that leave some lemmas undiscovered, or the absence of lemmas in the corpus required for transitive closure. About 64% of the 167 missed pairs are of category R. About half of these pairs result from a lack of prefixation rules – mainly affecting verbs – with a wide variety of prefixes (zu-, um-, etc.), including prepositional prefixes like herum- (around) or über- (over). We intentionally ignored these derivations, since they frequently lead to semantically unrelated pairs. In fact, merely five of the remaining 36% of false negative pairs (category M) do not involve prefixation. However, this analysis, as well as the rather low coverage for verb-involving rules (cf. Table 7), shows that DERIVBASE might benefit from more prefix rules. Apart from the lack of prefixation coverage and a few other, rather infrequent, missing rules, we did not find any substantial deficits. Most of the remaining errors are due to German idiosyncrasies and exceptional derivations, e.g., fahrenV – FahrtN (drive – trip), where the regular zero derivation would result in Fahr.
We have employed an evaluation method that uses two separate samples to assess precision and 5http://goo.gl/7KG2U; license cc-by-sa 3.0 recall to deal with the high number of false negatives. Our analyses indicate two interesting directions for future work: (a) specific handling of proper nouns, which partake in specific derivations; and (b) the use of graph clustering instead of the transitive closure to avoid errors resulting from long transitive chains. Finally, we plan to employ distributional semantics methods (Turney and Pantel, 2010) to help remove semantically unrelated pairs as well as distinguish automatically between only morphologically (M) or both morphologically and semantically (R) related pairs. Last, but not least, this allows us to group derivation rules according to their semantic properties. For example, nouns with -er suffixes often denote persons and are agentivizations of a basis word (Bilgin et al., 2004). Acknowledgments The first and third authors were supported by the EC project EXCITEMENT (FP7 ICT-287923). The second author was supported by the Croatian Science Foundation (project 02.03/162: “Derivational Semantic Models for Information Retrieval”). We thank the reviewers for their constructive comments. References George W. Adamson and Jillian Boreham. 1974. The use of an association measure based on character structure to identify semantically related pairs of words and document titles. Information Processing and Management, 10(7/8):253–260. Enrique Amigó, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal constraints. Information Retrieval, 12(4):461–486. Gerhard Augst. 1975. Lexikon zur Wortbildung. Forschungsberichte des Instituts für Deutsche Sprache. Narr, Tübingen. Harald R. Baayen, Richard Piepenbrock, and Leon Gulikers. 1996. The CELEX Lexical Database. Release 2. LDC96L14. Linguistic Data Consortium, University of Pennsylvania, Philadelphia, PA. Kenneth R Beesley and Lauri Karttunen. 2003. Finite state morphology, volume 18. CSLI publications Stanford. Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2012. Learning entailment relations by global graph structure optimization. Computational Linguistics, 38(1):73–111. 1209 Balthazar Bickel and Johanna Nichols. 2001. Inflectional morphology. In Timothy Shopen, editor, Language Typology and Syntactic Description, Volume III: Grammatical categories and the lexicon, pages 169–240. CUP, Cambridge. Orhan Bilgin, Özlem Çetino˘glu, and Kemal Oflazer. 2004. Morphosemantic relations in and across Wordnets. In Proceedings of the Global WordNet Conference, pages 60–66, Brno, Czech Republic. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 89–97, Beijing, China. Eric Brill. 1992. A simple rule-based part of speech tagger. In Proceedings of the Workshop on Speech and Natural Language, pages 112–116, Harriman, New York. Jean C. Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2):249–254. Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70:213–220. Béatrice Daille, Cécile Fabre, and Pascale Sébillot. 2002. Applications of computational morphology. In Paul Boucher, editor, Many Morphologies, pages 210–234. Cascadilla Press. Hervé Déjean. 1998. 
Morphemes as necessary concept for structures discovery from untagged corpora. In Proceedings of the Joint Conferences on New Methods in Language Processing and Computational Natural Language Learning, pages 295–298, Sydney, Australia. Gertrud Faaß, Ulrich Heid, and Helmut Schmid. 2010. Design and application of a gold standard for morphological analysis: SMOR in validation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation, pages 803– 810. Christiane Fellbaum, Anne Osherson, and Peter Clark. 2009. Putting semantics into WordNet’s "morphosemantic" links. In Proceedings of the Third Language and Technology Conference, pages 350–358, Pozna´n, Poland. Wolfgang Finkler and Günter Neumann. 1988. Morphix - a fast realization of a classification-based approach to morphology. In Proceedings of 4th Austrian Conference of Artificial Intelligence, pages 11– 19, Vienna, Austria. Arne Fitschen. 2004. Ein computerlinguistisches Lexikon als komplexes System. Ph.D. thesis, IMS, Universität Stuttgart. Éric Gaussier. 1999. Unsupervised learning of derivational morphology from inflectional lexicons. In ACL’99 Workshop Proceedings on Unsupervised Learning in Natural Language Processing, pages 24–30, College Park, Maryland, USA. Rebecca Green, Bonnie J. Dorr, and Philip Resnik. 2004. Inducing frame semantic verb classes from wordnet and ldoce. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 375–382, Barcelona, Spain. Nizar Habash and Bonnie Dorr. 2003. A categorial variation database for English. In Proceedings of the Anuual Meeting of the North American Association for Computational Linguistics, pages 96–102, Edmonton, Canada. Harald Hammarström and Lars Borin. 2011. Unsupervised learning of morphology. Computational Linguistics, 37(2):309–350. Wolfgang Hoeppner. 1980. Derivative Wortbildung der deutschen Gegenwartssprache und ihre algorithmische Analyse. Narr, Tübingen. Lauri Karttunen and Kenneth R Beesley. 1992. Twolevel rule compiler. Xerox Corporation. Palo Alto Research Center. Lauri Karttunen and Kenneth R. Beesley. 2005. Twenty-five years of finite-state morphology. In Antti Arppe, Lauri Carlson, Krister Lindén, Jussi Piitulainen, Mickael Suominen, Martti Vainio, Hanna Westerlund, and Anssi Yli-Jyr, editors, Inquiries into Words, Constraints and Contexts. Festschrift for Kimmo Koskenniemi on his 60th Birthday, pages 71– 83. CSLI Publications, Stanford, California. Kimmo Koskenniemi. 1983. Two-level Morphology: A General Computational Model for Word-Form Recognition and Production. Ph.D. thesis, University of Helsinki. Catherine Macleod, Ralph Grishman, Adam Meyers, Leslie Barrett, and Ruth Reeves. 1998. NOMLEX: A lexicon of nominalizations. In In Proceedings of Euralex98, pages 187–193. Prasenjit Majumder, Mandar Mitra, Swapan K. Parui, Gobinda Kole, Pabitra Mitra, and Kalyankumar Datta. 2007. YASS: Yet another suffix stripper. ACM Transactions on Information Systems, 25(4):18:1–18:20. Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a two-stage discriminative parser. In In Proceedings of the Conference on Computational Natural Language Learning, pages 216–220, New York, NY. Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. Annotating noun argument structure for NomBank. In Proceedings of the 4th International Conference on Language Resources and Evaluation, Lisbon, Portugal. 
1210 Petar Milin, Victor Kuperman, Aleksandar Kostic, and R Harald Baayen. 2009. Paradigms bit by bit: An information theoretic approach to the processing of paradigmatic structure in inflection and derivation. Analogy in grammar: Form and acquisition, pages 214–252. Karel Pala and Dana Hlaváˇcková. 2007. Derivational relations in Czech WordNet. In Proceedings of the ACL Workshop on Balto-Slavonic Natural Language Processing: Information Extraction and Enabling Technologies, pages 75–81. Maciej Piasecki, Radoslaw Ramocki, and Marek Maziarz. 2012. Recognition of Polish derivational relations based on supervised learning scheme. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 916– 922, Istanbul, Turkey. Martin Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130–137. Anne Schiller, Simone Teufel, Christine Stöckert, and Christine Thielen. 1999. Guidelines für das Tagging deutscher Textcorpora mit STTS. Technical report, Institut fur maschinelle Sprachverarbeitung, Stuttgart. Helmut Schmid, Arne Fitschen, and Ulrich Heid. 2004. Smor: A German computational morphology covering derivation, composition and inflection. In Proceedings of the Fourth International Conference on Language Resources and Evaluation, Lisbon, Portugal. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Language Processing, pages 44–49, Manchester, UK. Patrick Schone and Daniel Jurafsky. 2000. Knowledge-free induction of morphology using latent semantic analysis. In Proceedings of the Conference on Natural Language Learning, pages 67–72, Lisbon, Portugal. Jan Šnajder and Bojana Dalbelo Baši´c. 2008. Higherorder functional representation of Croatian inflectional morphology. In Proceedings of the 6th International Conference on Formal Approaches to South Slavic and Balkan Languages, pages 121–130, Dubrovnik, Croatia. Jan Šnajder and Bojana Dalbelo Baši´c. 2010. A computational model of Croatian derivational morphology. In Proceedings of the 7th International Conference on Formal Approaches to South Slavic and Balkan Languages, pages 109–118, Dubrovnik, Croatia. Idan Szpektor and Ido Dagan. 2008. Learning entailment rules for unary templates. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 849–856, Manchester, UK. Kapil Thadani and Kathleen McKeown. 2011. Towards strict sentence intersection: Decoding and evaluation strategies. In Proceedings of the ACL Workshop on Monolingual Text-To-Text Generation, pages 43–53, Portland, Oregon. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141–188. Andrea Zielinski and Christian Simon. 2008. Morphisto - an open source morphological analyzer for German. In Proceedings of the 7th International Workshop on Finite-State Methods and Natural Language Processing, pages 224–231, Ispra, Italy. 1211
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1212–1221, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Crowdsourcing Interaction Logs to Understand Text Reuse from the Web Martin Potthast Matthias Hagen Michael Völske Benno Stein Bauhaus-Universität Weimar 99421 Weimar, Germany <first name>.<last name>@uni-weimar.de Abstract We report on the construction of the Webis text reuse corpus 2012 for advanced research on text reuse. The corpus compiles manually written documents obtained from a completely controlled, yet representative environment that emulates the web. Each of the 297 documents in the corpus is about one of the 150 topics used at the TREC Web Tracks 2009–2011, thus forming a strong connection with existing evaluation efforts. Writers, hired at the crowdsourcing platform oDesk, had to retrieve sources for a given topic and to reuse text from what they found. Part of the corpus are detailed interaction logs that consistently cover the search for sources as well as the creation of documents. This will allow for in-depth analyses of how text is composed if a writer is at liberty to reuse texts from a third party—a setting which has not been studied so far. In addition, the corpus provides an original resource for the evaluation of text reuse and plagiarism detectors, where currently only less realistic resources are employed. 1 Introduction The web has become one of the most common sources for text reuse. When reusing text from the web, humans may follow a three step approach shown in Figure 1: searching for appropriate sources on a given topic, copying of text from selected sources, modification and paraphrasing of the copied text. A considerable body of research deals with the detection of text reuse, and, in particular, with the detection of cases of plagiarism (i.e., the reuse of text with the intent of disguising the fact that text has been reused). Similarly, a large number of commercial software systems is being developed whose purpose is the detection of plagiarism. Both the developers of these systems as well as researchers working on the subject matter frequently claim their approaches to be searching the entire web or, at least, to be scalable to web size. However, there is hardly any evidence to substantiate this claim—rather the opposite can be observed: commercial plagiarism detectors have not been found to reliably identify plagiarism from the web (Köhler and Weber-Wulff, 2010), and the evaluation of research prototypes even under laboratory conditions shows that there is still a long way to go (Potthast et al., 2010b). We explain the disappointing state of the art by the lack of realistic, large-scale evaluation resources. With our work, we want to contribute to closing the gap. In this regard the paper in hand introduces the Webis text reuse corpus 2012 (Webis-TRC-12), which, for the first time, emulates the entire process of reusing text from the web, both at scale and in a controlled environment. The corpus comprises a number of features that set it apart from previous ones: (1) the topic of each document in the corpus is derived from a topic of the TREC Web Track, and the sources to copy from have been retrieved manually from the ClueWeb corpus. (2) The search for sources is logged, including click-through and browsing data. (3) A fine-grained edit history has been recorded for each document. 
(4) A total of 297 documents were written with an average length of about 5700 words, while diversity is ensured via crowdsourcing. Altogether, this corpus forms the currently most realistic sample of writers reusing text. The corpus is publicly available at http://www.webis.de/research/corpora.

[Figure 1: The basic steps of reusing text from the web (Potthast, 2011) — search, copy & paste, modification.]

1.1 Related Work

As organizers of the annual PAN plagiarism detection competitions (http://pan.webis.de), we have introduced the first standardized evaluation framework for that purpose (Potthast et al., 2010b). Among others, it comprises a series of corpora that consist of automatically generated cases of plagiarism, provided in the form of the PAN plagiarism corpora 2009–2011. The corpora have been used to evaluate dozens of plagiarism detection approaches within the respective competitions in these years (see Potthast et al., 2009, 2010a, 2011, for overviews of approaches and evaluation results of each competition); but even though they have been adopted by the community, a number of shortcomings render them less realistic:

1. All plagiarism cases were generated by randomly selecting text passages from documents and inserting them at random positions in a host document. This way, the reused passages do not match the topic of the host document.

2. The majority of the reused passages were modified in order to obfuscate the reuse. However, the applied modification strategies, again, are basically random: shuffling, replacing, inserting, or deleting words randomly. An effort was made to avoid non-readable text, yet none of it bears any semantics.

3. The corpus documents are parts of books from Project Gutenberg. Many of these books are rather old, whereas today the web is the predominant source for text reuse.

To overcome the second issue, about 4 000 passages were rewritten manually via crowdsourcing on Amazon's Mechanical Turk for the 2011 corpus. But, because of the first issue (random passage insertion), a topic drift analysis can spot a reused passage more easily than a search within the document set containing the original source (Potthast et al., 2011). From these observations it becomes clear that there are limits to the automatic construction of such corpora. The Webis text reuse corpus 2012 addresses all of the mentioned issues since it has been constructed manually.

Besides the PAN corpora, there are two other corpora that comprise "genuinely reused" text: the Clough09 corpus and the Meter corpus. The former consists of 57 answers to one of five computer science questions that were reused from a respective Wikipedia article (Clough and Stevenson, 2011). While the text was genuinely written by a number of volunteer students, the choice of topics is narrow, and text lengths range from 200 to 300 words, which is hardly more than 2–3 paragraphs. Also, the sources from which text was reused were given up front, so that there is no data about their retrieval. The Meter corpus annotates 445 cases of text reuse among 1 716 news articles (Clough et al., 2002). The cases of text reuse in this corpus are realistic for the news domain; however, they have not been created by the reuse process outlined in Figure 1. Our corpus complements these two resources.
2 Corpus Construction Two data sets form the basis for constructing our corpus, namely (1) a set of topics to write about and (2) a set of web pages to research about a given topic. With regard to the former, we resort to topics used at TREC, specifically to those used at the Web Tracks 2009–2011. With regard to the latter, we employ the ClueWeb corpus from 20094 (and not the “web in the wild”). The ClueWeb comprises more than one billion documents from ten languages and can be considered as a representative cross-section of the real web. It is a widely accepted resource among researchers and became one of the primary resources to evaluate the retrieval performance of search engines within several TREC tracks. Our corpus’s strong connection to TREC will allow for unforeseen synergies. Based on these decisions our 4http://lemurproject.org/clueweb09 1213 corpus construction steps can be summarized as follows: 1. Rephrasing of the 150 topics used at the TREC Web Tracks 2009–2011 so that they explicitly invite people to write an essay. 2. Indexing of the ClueWeb corpus category A (the entire English portion with about 0.5 billion documents) using the BM25F retrieval model plus additional features. 3. Development of a search interface that allows for answering queries within milliseconds and that is designed along the lines of commercial search interfaces. 4. Development of a browsing API for the ClueWeb, which serves ClueWeb pages on demand and which rewrites links of delivered pages, now pointing to their corresponding ClueWeb pages on our servers (instead of to the originally crawled URL). 5. Recruiting 27 writers, 17 of whom with a professional writing background, hired at the crowdsourcing platform oDesk from a wide range of hourly rates for diversity. 6. Instructing the writers to write one essay at a time of at least 5000 words length (corresponding to an average student’s homework assignment) about an open topic of their choice, using our search engine—hence browsing only ClueWeb pages. 7. Logging all writers’ interactions with the search engine and the ClueWeb on a per-essay basis at our site. 8. Logging all writers’ edits to their essays in a fine-grained edit log: a snapshot was taken whenever a writer stopped writing for more than 300ms. 9. Double-checking all of the essays for quality. After having deployed the search engine and completed various usability tests, the actual corpus construction took nine months, from April 2012 through December 2012. Obviously, the outlined experimental setup can serve different lines of research and is publicly available as well. The remainder of the section presents elements of our setup in greater detail. 2.1 Topic Preparation Since the topics used at the TREC Web Tracks were not amenable for our purpose as is, we rephrased them so that they ask for writing an essay instead of searching for facts. Consider for example topic 001 of the TREC Web Track 2009: Query. obama family tree Description. Find information on President Barack Obama’s family history, including genealogy, national origins, places and dates of birth, etc. Sub-topic 1. Find the TIME magazine photo essay “Barack Obama’s Family Tree.” Sub-topic 2. Where did Barack Obama’s parents and grandparents come from? Sub-topic 3. Find biographical information on Barack Obama’s mother. This topic is rephrased as follows: Obama’s family. Write about President Barack Obama’s family history, including genealogy, national origins, places and dates of birth, etc. 
Where did Barack Obama's parents and grandparents come from? Also include a brief biography of Obama's mother.

In the example, Sub-topic 1 is considered too specific for our purposes, while the other sub-topics are retained. TREC Web Track topics divide into faceted and ambiguous topics. While topics of the first kind can be directly rephrased into essay topics, for topics of the second kind one of the available interpretations was chosen.

2.2 A Controlled Web Search Environment

To give the oDesk writers a familiar search experience while maintaining reproducibility at the same time, we developed a tailored search engine called ChatNoir (Potthast et al., 2012b; http://chatnoir.webis.de). Besides ours, the only other public search engine for the ClueWeb is Carnegie Mellon's Indri (http://lemurproject.org/clueweb09.php/index.php#Services), which, unfortunately, falls far short of our efficiency requirements. Moreover, its search interface does not follow the standard in terms of result page design, and it does not give access to interaction logs. Our search engine answers queries within milliseconds, its interface follows industry standards, and it features an API that allows for user tracking. ChatNoir is based on the BM25F retrieval model (Robertson et al., 2004), uses the anchor text list provided by Hiemstra and Hauff (2010), the PageRanks provided by Carnegie Mellon University alongside the ClueWeb corpus, and the spam rank list provided by Cormack et al. (2011). ChatNoir comes with a proximity feature with variable-width buckets as described by Elsayed et al. (2011). Our choice of retrieval model and ranking features is intended to provide a reasonable baseline performance. However, it is neither nearly as mature as those of commercial search engines, nor does it compete with the best-performing models from TREC. Yet, it is among the most widely accepted models in information retrieval, which underlines our goal of reproducibility.

In addition to its retrieval model, ChatNoir implements two search facets: text readability scoring and long text search. The first facet, similar to that provided by Google, scores the readability of a text found on a web page via the well-known Flesch-Kincaid grade level formula (Kincaid et al., 1975): it estimates the number of years of education required in order to understand a given text. This number is mapped onto the three categories "Simple" (up to 5 years), "Intermediate" (between 5 and 9 years), and "Expert" (at least 9 years). The "Long Text" search facet omits search results which do not contain at least one continuous paragraph of text that exceeds 300 words. The two facets can be combined with each other.

When clicking on a search result, ChatNoir does not link into the real web but redirects into the ClueWeb. Though the ClueWeb provides the original URLs from which the web pages have been obtained, many of these pages have since disappeared or been updated. We hence set up an API that serves web pages from the ClueWeb on demand: when accessing a web page, it is pre-processed before being shipped, removing automatic referrers and replacing all links to the real web with links to their counterparts inside the ClueWeb. This way, the ClueWeb can be browsed as if surfing the real web, while it becomes possible to track a user. The ClueWeb is stored in the HDFS of our 40-node Hadoop cluster, and web pages are fetched directly from there with latencies of about 200ms.
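The readability facet can be sketched in a few lines using the standard Flesch-Kincaid grade level formula (Kincaid et al., 1975) and the category thresholds given above; the syllable counter below is a crude heuristic of our own, and ChatNoir's actual implementation may differ in its text statistics.

```python
import re

# Sketch of the readability facet: Flesch-Kincaid grade level mapped to the
# three categories "Simple", "Intermediate", and "Expert".

def count_syllables(word):
    # Rough heuristic: count vowel groups, at least one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59

def readability_category(text):
    grade = flesch_kincaid_grade(text)   # estimated years of education
    if grade <= 5:
        return "Simple"
    if grade < 9:
        return "Intermediate"
    return "Expert"

print(readability_category("The cat sat on the mat. It was warm."))
```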
ChatNoir’s inverted index has been optimized to guarantee fast response times, and it is deployed alongside Hadoop on the same cluster. Table 1: Demographics of the 12 Batch 2 writers. Writer Demographics Age Gender Native language(s) Minimum 24 Female 67% English 67% Median 37 Male 33% Filipino 25% Maximum 65 Hindi 17% Academic degree Country of origin Second language(s) Postgraduate 41% UK 25% English 33% Undergraduate 25% Philippines 25% French 17% None 17% USA 17% Afrikaans, Dutch, n/a 17% India 17% German, Spanish, Australia 8% Swedish each 8% South Africa 8% None 8% Years of writing Search engines used Search frequency Minimum 2 Google 92% Daily 83% Median 8 Bing 33% Weekly 8% Standard dev. 6 Yahoo 25% n/a 8% Maximum 20 Others 8% 2.3 Two Batches of Writing In order to not rely only on the retrieval model implemented in our controlled web search environment, we divided the task into two batches, so that two essays had to be written for each of the 150 topics, namely one in each batch. In Batch 1, our writers did not search for sources themselves, but they were provided up front with an average of 20 search results to choose from for each topic. These results were obtained from the TREC Web Track relevance judgments (so-called “qrels”): only documents that were found to be relevant or key documents for a given topic by manual inspection of the NIST assessors were provided to our writers. These documents result from the combined wisdom of all retrieval models of the TREC Web Tracks 2009–2011, and hence can be considered as optimum retrieval results produced by the state of the art in search engine technology. In Batch 2, in order to obtain realistic search interaction logs, our writers were instructed to search for source documents using ChatNoir. 2.4 Crowdsourcing Writers Our ideal writer has experience in writing, is capable of writing about a diversity of topics, can complete a text in a timely manner, possesses decent English writing skills, and is well-versed in using the aforementioned technologies. After bootstrapping our setup with 10 volunteers recruited at our university, it became clear that, because of the workload involved, accomplishing our goals would not be possible with volunteers only. Therefore, we resorted to hiring (semi-)professional writers and made use of the crowdsourcing platform oDesk.7 Crowdsourcing has quickly become one of the 7http://www.odesk.com 1215 Table 2: Key figures of the Webis text reuse corpus 2012. 
Corpus Distribution Total characteristic min avg max stdev Writers (Batch 1+2) 27 Essays (Topics) (Two essays per topic) 297 (150) Essays / Writer 1 2 66 15.9 Queries (Batch 2) 13 655 Queries / Essay 4 91.0 616 83.1 Clicks (Batch 2) 16 739 Clicks / Essay 12 111.6 443 80.3 Clicks / Query 1 2.3 76 3.3 Irrelevant (Batch 2) 5 962 Irrelevant / Essay 1 39.8 182 28.7 Irrelevant / Query 0 0.5 60 1.4 Relevant (Batch 2) 251 Relevant / Essay 0 1.7 7 1.5 Relevant / Query 0 0.0 4 0.2 Key (Batch 2) 1 937 Key / Essay 1 12.9 46 7.5 Key / Query 0 0.2 22 0.7 Corpus Distribution Total characteristic min avg max stdev Search Sessions (Batch 2) 931 Sessions / Essay 1 12.3 149 18.9 Days (Batch 2) 201 Days / Essay 1 4.9 17 2.7 Hours (Batch 2) 2 068 Hours / Writer 3 129.3 679 167.3 Hours / Essay 3 7.5 10 2.5 Edits (Batch 1+2) 633 334 Edits / Essay 45 2 132.4 6 975 1 444.9 Edits / Day 5 2 959.5 8 653 1 762.5 Words (Batch 1+2) 1 704 354 Words / Essay 260 5 738.8 15 851 1 604.3 Words / Writer 2 078 63 124.2 373 975 89 246.7 Sources (Batch 1+2) 4 582 Sources / Essay 0 15.4 69 10.0 Sources / Writer 5 169.7 1 065 269.6 cornerstones for constructing evaluation corpora, which is especially true for paid crowdsourcing. Compared to Amazon’s Mechanical Turk (Barr and Cabrera, 2006), which is used more frequently than oDesk, there are virtually no workers at oDesk submitting fake results because of its advanced rating features for workers and employers. Moreover, oDesk tracks their workers by randomly taking screenshots, which are provided to employers in order to check whether the hours logged correspond to work-related activity. This allowed us to check whether our writers used our environment instead of other search engines and editors. During Batch 2, we have conducted a survey among the twelve writers who worked for us at that time. Table 1 gives an overview of the demographics of these writers, based on a questionnaire and their resumes at oDesk. Most of them come from an English-speaking country, and almost all of them speak more than one language, which suggests a reasonably good education. Two thirds of the writers are female, and all of them have years of writing experience. Hourly wages were negotiated individually and range from 3 to 34 US dollars (dependent on skill and country of residence), with an average of about 12 US dollars. For ethical reasons, we payed at least the minimum wage of the respective countries involved. In total, we spent 20 468 US dollars to pay the writers—an amount that may be considered large compared to other scientific crowdsourcing efforts from the literature, but small in terms of the potential of crowdsourcing to make a difference in empirical science. 3 Corpus Analysis This section presents selected results of a preliminary corpus analysis. We overview the data and shed some light onto the search and writing behavior of writers. 3.1 Corpus Statistics Table 2 shows key figures of the collected interaction logs, including the absolute numbers of queries, relevance judgments, working times, number of edits, words, and retrieved sources, as well as their relation to essays, writers, and work time, where applicable. On average, each writer wrote 2 essays while the standard deviation is 15.9, since one very prolific writer managed to write 66 essays. From a total of 13 655 queries submitted by the writers within Batch 2, each essay got an average of 91 queries. The average number of results clicked per query is 2.3. 
For comparison, we computed the average number of clicks per query in the AOL query log (Pass et al., 2006), which is 2.0. In this regard, the behavior of our writers on individual queries does not differ much from that of the average AOL user in 2006. Most of the clicks that we recorded are search result clicks, whereas 2 457 of them are browsing clicks on web page links. Among the browsing clicks, 11.3% are clicks on links that point to the same web page (i.e., anchor links using the hash part of a URL). The longest click trail contains 51 unique web pages, but most trails are very short. This is a surprising result, since we expected a larger proportion of browsing clicks, but it also shows that our writers 1216 relied heavily on the ChatNoir’s ranking. Regarding search facets, we observed that our writers used them only for about 7% of their queries. In these cases, the writers used either the “Long Text” facet, which retrieves web pages containing at least one continuous passage of at least 300 words, or set the desired reading level to “Expert.” The query log of each writer in Batch 2 divides into 931 search sessions with an average of 12.3 sessions per topic. Here, a session is defined as a sequence of queries recorded for a given topic which is not divided by a break longer than 30 minutes. Despite other claims in the literature (Jones and Klinkner, 2008; Hagen et al., 2013) we argue that, in our case, sessions can be reliably identified by timeouts because we have a priori knowledge about which query belongs to which essay. Typically, completing an essay took 4.9 days, which includes to a long-lasting exploration of the topic at hand. The 297 essays submitted within the two batches were written with a total of 633 334 edits. Each topic was edited 2 132 times on average, whereas the standard deviation gives an idea about how diverse the modifications of the reused text were. Writers were not specifically instructed to modify a text as much as possible—rather they were encouraged to paraphrase in order to foreclose the detection by an automatic text reuse detector. This way, our corpus captures each writer’s idea of the necessary modification effort to accomplish this goal. The average lengths of the essays is 5 739 words, but there are also some short essays if hardly any useful information could be found on the respective topics. About 15 sources have been reused in each essay, whereas some writers reused text from as many as 69 unique documents. 3.2 Relevance Judgments In the essays from Batch 2, writers reused texts from web pages they found during their search. This forms an interesting relevance signal which allows us to separate web pages relevant to a given topic from those which are irrelevant. Following the terminology of TREC, we consider web pages from which text is reused as key documents for the respective essay’s topic, while web pages that are on a click trail leading to a key document are termed relevant. The unusually high number of key documents compared to relevant documents is explained by the fact that there are only few click trails of this kind, whereas most web pages Table 3: Confusion matrix of TREC judgments versus writer judgments. TREC Writer judgment judgment irrelevant relevant key unjudged spam (-2) 3 0 1 2 446 spam (-1) 64 4 18 16 657 irrelevant (0) 219 13 73 33 567 relevant (1) 114 8 91 10 676 relevant (2) 44 5 56 3 711 key (3) 12 0 8 526 unjudged 5 506 221 1 690 – have been retrieved directly. 
The remainder of web pages that were viewed but discarded by our writers are considered as irrelevant. Each year, the NIST assessors employed for the TREC conference manually review hundreds of web pages that have been retrieved by experimental retrieval systems that are submitted to the various TREC tracks. This was also the case for the TREC Web Tracks from which the topics of our corpus are derived. We have compared the relevance judgments provided by TREC for these tracks with the implicit judgments from our writers. Table 3 contrasts the two judgment scales in the form of a confusion matrix. TREC uses a six-point Likert scale ranging from -2 (extreme Spam) to 3 (key document). For 733 of the documents visited by our writers, TREC relevance judgments can be found. From these, 456 documents (62%) have been considered irrelevant for the purposes of reuse by our writers, however, the TREC assessor disagree with this judgment in 170 cases. Regarding the documents considered as key documents for reuse by our writers, the TREC assessors disagree on 92 of the 247 documents. An explanation for the disagreement can be found in the differences between the TREC ad hoc search task and our text reuse task: the information nuggets (small chunks of text) that satisfy specific factual information needs from the original TREC topics are not the same as the information “ingots” (big chunks of text) that satisfy our writers’ needs. 3.3 Research Behavior To analyze the writers’ search behavior during essay writing in Batch 2, we have recorded detailed search logs of their queries while they used our search engine. Figure 2 shows for each of the 150 essays of this batch a curve of the percentage of queries at times between a writer’s first query and an essay’s completion. We have normalized the time axis and excluded working breaks of more 1217 165 33 20 16 158 210 58 70 113 23 18 119 28 27 196 23 347 109 248 40 148 153 113 154 319 64 26 30 18 208 35 24 34 114 284 46 52 60 52 48 66 97 50 138 36 42 34 70 34 57 120 616 74 101 62 32 69 106 4 136 28 108 98 47 46 10 55 50 88 48 198 94 218 48 198 112 76 20 147 170 139 56 106 323 70 60 74 104 51 42 301 111 69 44 150 274 48 92 155 99 241 58 84 181 40 135 46 118 185 14 29 133 61 17 23 78 24 66 80 33 68 12 162 60 76 62 108 22 42 42 75 69 18 147 208 30 24 173 16 16 52 26 36 9 60 8 64 42 74 64 A F E D C B 1 5 10 15 20 25 Figure 2: Spectrum of writer search behavior. Each grid cell corresponds to one of the 150 essays of Batch 2 and shows a curve of the percentage of submitted queries (y-axis) at times between the first query until the essay was finished (x-axis). The numbers denote the amount of queries submitted. The cells are sorted by area under the curve, from the smallest area in cell A1 to the largest area in cell F25. than five minutes. The curves are organized so as to highlight the spectrum of different search behaviors we have observed: in row A, 70-90% of the queries are submitted toward the end of the writing task, whereas in row F almost all queries are submitted at the beginning. In between, however, sets of queries are often submitted in the form of “bursts,” followed by extended periods of writing, which can be inferred from the steps in the curves (e.g., cell C12). Only in some cases (e.g., cell C10) a linear increase of queries over time can be observed for a non-trivial amount of queries, which indicates continuous switching between searching and writing. 
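To make the construction of the Figure 2 curves concrete, the following sketch recomputes such a curve and the area under it from a list of per-essay query timestamps. It is an illustration under our own assumptions about the log format (numeric timestamps, long breaks already removed), not the original analysis code.

import numpy as np

def query_time_curve(query_times, n_points=101):
    """Cumulative fraction of submitted queries as a function of normalized
    working time for one essay."""
    t = np.sort(np.asarray(query_times, dtype=float))
    span = t[-1] - t[0]
    t = (t - t[0]) / span if span > 0 else np.zeros_like(t)
    grid = np.linspace(0.0, 1.0, n_points)
    frac = np.searchsorted(t, grid, side="right") / len(t)
    return grid, frac

def area_under_curve(query_times):
    """Area under the curve, used to order the cells of the figure."""
    grid, frac = query_time_curve(query_times)
    return float(np.trapz(frac, grid))

# Essays whose queries come late have a small area; essays whose queries are
# front-loaded have a large one:
late_queries = [90, 92, 95, 99, 100]
early_queries = [0, 1, 2, 3, 100]
print(area_under_curve(late_queries) < area_under_curve(early_queries))  # True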
From these observations, it can be inferred that our writers sometimes conducted a “first fit” search and reused the first texts they found easily. However, as the essay progressed and the low hanging fruit in terms of search were used up, they had to search more intensively in order to complete their essay. More generally, this data gives an idea of how humans perform exploratory search in order to learn about a given topic. Our current research on this aspect focuses on the prediction of search mission types, since we observe that the search mission type does not simply depend on the writer or the perceived topic difficulty. 3.4 Visualizing Edit Histories To analyze the writers’ writing style, that is to say, how writers reuse texts and how the essay is completed in both batches, we have recorded the edit logs of their essays. Whenever a writer stopped writing for more than 300ms, a new edit was stored in a version control system at our site. The edit logs document the entire text evolution, from first the keystroke until an essay was completed. We have used the so-called history flow visualization to analyze the writing process (Viégas et al., 2004). Figure 3 shows four examples from the set of 297 essays. Based on these visualizations, a number of observations can be made. In general, we identify two distinct writing-style types to perform text reuse, namely to build up an essay during writing, or, to first gather material and then to boil down a text until the essay is completed. Later in this section, we will analyze this observation in greater detail. Within the plots, a number of events can be spotted that occurred during writing: in the top left plot, encircled as area A, the insertion of a new piece of text can be observed. Though marked as original text at first, the writer worked on this passage and then revealed that it was reused from another source. At area B in the top right plot, one can observe the reorganization of two passages as they exchange places from one edit to another. Area C in the bottom right plot shows that the writer, shortly before completing this essay, reorganized substantial parts. Area D in the same plot shows how the writer went about boiling down the text by incorporating contents from different passages that have been collected beforehand and, then, from one edit to another, discarded most of the rest. The saw-tooth shaped pattern in area E in the bottom left plot reveals that, even though the writer of this essay adopts a build-up style, she still pastes passages from her sources into the text one at a time, and then individually boils down each. Our visualizations also include information about the text positions where writers have been working at a given point in time; these positions are shown as blue dots in the plots. In this regard distinct writing patterns are discernible of writers who go through a text linearly versus those who do not. Future work will include an analysis of these writing patterns. 1218 A B C D E Figure 3: Types of text reuse: build-up reuse (left) versus boil-down reuse (right). Each plot shows the text length at text edit between first keystroke and essay completion; edits have been recorded during writing whenever a writer stopped for more than 300ms. Colors encode different source documents. Original text is white; blue dots indicate the text position of the writer’s last edit. 
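A minimal sketch of this kind of edit logging, under a hypothetical client-side interface of our own rather than the actual logging infrastructure, is shown below; the skyline helper returns the per-edit text lengths on which the history flow plots and the reuse-style analysis in the next section are based.

import time

PAUSE_THRESHOLD = 0.3   # seconds; a longer pause closes the current edit

class EditRecorder:
    """Client-side sketch of the edit logging: whenever the writer pauses for
    more than PAUSE_THRESHOLD seconds, the text state reached before the pause
    is stored as one edit.  The real setup committed these states to a version
    control system on the server side."""

    def __init__(self):
        self.edits = []          # one text snapshot per recorded edit
        self._last_text = None
        self._last_time = None

    def on_keystroke(self, text, now=None):
        """Called with the full current text after every keystroke."""
        now = time.monotonic() if now is None else now
        if self._last_time is not None and now - self._last_time > PAUSE_THRESHOLD:
            self.edits.append(self._last_text)
        self._last_text = text
        self._last_time = now

    def finish(self):
        """Store the final state when the essay is submitted."""
        if self._last_text is not None:
            self.edits.append(self._last_text)
        return self.edits

    def skyline(self):
        """Text length after each edit, i.e. the curve plotted in the history
        flow visualizations and used in Section 3.5."""
        return [len(t) for t in self.edits]

# Toy usage with explicit timestamps:
rec = EditRecorder()
rec.on_keystroke("T", now=0.0)
rec.on_keystroke("Te", now=0.1)
rec.on_keystroke("Text", now=1.0)      # 0.9 s pause: "Te" becomes an edit
print(rec.finish(), rec.skyline())     # ['Te', 'Text'] [2, 4]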
3.5 Build-up Reuse versus Boil-down Reuse Based on the edit history visualizations, we have manually classified the 297 essays of both batches into two categories, corresponding to the two styles build-up reuse and boil-down reuse. We found that 40% are instances of build-up reuse, 45% are instances of boil-down reuse, and 13% fall in between, excluding 2% of the essays as outliers due to errors or for being too short. The in-between cases show that a writer actually started one way and then switched to the respective other style of reuse so that the resulting essays could not be attributed to a single category. An important question that arises out of this observation is whether different writers habitually exert different reuse styles or whether they apply them at random. To obtain a better overview, we envision the applied reuse style of an essay by the skyline curve of its edit history visualization (i.e., by the curve that plots the length of an essay after each edit). Aggregating these curves on a per-writer basis reveals distinct Table 4: Contingency table: writers over reuse style. Reuse Writer ID Style A02A05A06A07A10A17A18A19A20A21A24 build-up 4 27 11 4 9 13 12 4 9 18 2 boil-down 52 5 0 14 2 13 11 3 0 0 24 mixed 10 3 0 1 1 7 6 0 0 3 1 patterns. For eight of our writers Figure 4 shows this characteristic. The plots are ordered by the shape of the averaged curve, starting from a linear increase (left) to a compound of steep increase to a certain length after which the curve levels out (right). The former shape corresponds to writers who typically apply build-up reuse, while the latter can be attributed to writers who typically apply boil-down reuse. When comparing the plots we notice a very interesting effect: it appears that writers who conduct boil-down reuse vary more wildly in their behavior. The reuse style of some writers, however, falls in between the two extremes. Besides the visual analysis, Table 4 shows the distribution of reuse styles 1219 Text length (%) Text length (%) A10 (12 essays) A18 (32 essays) A24 (27 essays) A21 (21 essays) A06 (12 essays) A17 (33 essays) A02 (66 essays) A05 (37 essays) Edits (%) Edits (%) Edits (%) Edits (%) build up boil down Text reuse style Figure 4: Text reuse styles ranging from build-up reuse (left) to boil-down reuse (right). A gray curve shows the normalized length of an essay over the edits that went into it during writing. Curves are grouped by writers. The black curve marks the average of all other curves in a plot. for the eleven writers who contributed at least five essays. Most writers use one style for about 80% of their essays, whereas two writers (A17, A18) are exactly on par between the two styles. Based on Pearson’s chi-squared test, one can safely reject the null hypothesis that writers and text reuse styles are independent: χ2 = 139.0 with p = 7.8 · 10−20. Since our sample of authors and essays is sparse, Pearson’s chi-squared test may not be perfectly suited which is why we have also applied Fisher’s exact test, which computes probability p = 0.0005 that the null hypothesis is true. 4 Summary and Outlook This paper details the construction of the Webis text reuse corpus 2012 (Webis-TRC-12), a new corpus for text reuse research that has been created entirely manually on a large scale. We have recorded consistent interaction logs of human writers with a search engine as well as with the used text processor; these logs serve the purpose of studying how texts from the web are being reused for essay writing. 
Our setup is entirely reproducible: we have built a static web search environment consisting of a search engine along with a means to browse a large corpus of web pages as if it were the “real” web. Yet, in terms of scale, this environment is representative of the real web. Besides our corpus also this infrastructure is available to other researchers. The corpus itself goes beyond existing resources in that it allows for a much more fine-grained analysis of text reuse, and in that it significantly improves the realism of the data underlying evaluations of automatic tools to detect text reuse and plagiarism. Our analysis gives an overview of selected aspects of the new corpus. This includes corpus statistics about important variables, but also exploratory studies of search behaviors and strategies for reusing text. We present new insights about how text is composed, revealing two types of writers: those who build up a text as they go, and those who first collect a lot of material which then is boiled down until the essay is finished. Parts of our corpus have been successfully employed to evaluate plagiarism detectors in the PAN plagiarism detection competition 2012 (Potthast et al., 2012a). Future work will include analyses that may help to understand the state of mind of writers when reusing text as well as of plagiarists. We also expect insights with regard to the development of algorithms for detection purposes and for linguists studying the process of writing. Acknowledgements We thank our writers at oDesk and all volunteers for their contribution. We also thank Jan Graßegger and Martin Tippmann who kept the search engine up and running during corpus construction. 1220 References Jeff Barr and Luis Felipe Cabrera. 2006. AI gets a brain. Queue, 4(4):24–29. Paul Clough and Mark Stevenson. 2011. Developing a corpus of plagiarised short answers. Language Resources and Evaluation, 45:5–24. Paul Clough, Robert Gaizauskas, Scott S. L. Piao, and Yorick Wilks. 2002. METER: MEasuring TExt Reuse. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), Philadelphia, PA, USA, July 6–12, 2002, pages 152– 159. Gordon V. Cormack, Mark D. Smucker, and Charles L. A. Clarke. 2011. Efficient and effective spam filtering and re-ranking for large web datasets. Information Retrieval, 14(5):441–465. Tamer Elsayed, Jimmy J. Lin, and Donald Metzler. 2011. When close enough is good enough: approximate positional indexes for efficient ranked retrieval. In Proceedings of the 20th ACM Conference on Information and Knowledge Management (CIKM 2011), Glasgow, United Kingdom, October 24–28, 2011, pages 1993–1996. Matthias Hagen, Jakob Gomoll, Anna Beyer, and Benno Stein. 2013. From Search Session Detection to Search Mission Detection. In Proceedings of the 10th International Conference Open Research Areas in Information Retrieval (OAIR 2013), Lisbon, Portugal, May 22–24, 2013, to appear. Djoerd Hiemstra and Claudia Hauff. 2010. MIREX: MapReduce information retrieval experiments. Technical Report TR-CTIT-10-15, University of Twente. Rosie Jones and Kristina Lisa Klinkner. 2008. Beyond the session timeout: automatic hierarchical segmentation of search topics in query logs. In Proceedings of the 17th ACM Conference on Information and Knowledge Management (CIKM 2008), Napa Valley, California, USA, October 26–30, 2008, pages 699– 708. J. Peter Kincaid, Robert P. Fishburne, Richard L. Rogers, and Brad S. Chissom. 1975. 
Derivation of new readability formulas (automated readability index, Fog count and Flesch reading ease formula) for Navy enlisted personnel. Research Branch Report 8-75, Naval Air Station Memphis, Millington, TN. Katrin Köhler and Debora Weber-Wulff. 2010. Plagiarism detection test 2010. http://plagiat. htw-berlin.de/wp-content/uploads/ PlagiarismDetectionTest2010-final.pdf. Greg Pass, Abdur Chowdhury, and Cayley Torgeson. 2006. A picture of search. In Proceedings of the 1st International Conference on Scalable Information Systems (Infoscale 2006), Hong Kong, May 30–June 1, 2006, paper 1. Martin Potthast, Benno Stein, Andreas Eiselt, Alberto Barrón-Cedeño, and Paolo Rosso. 2009. Overview of the 1st international competition on plagiarism detection. In SEPLN 2009 Workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse (PAN 2009), pages 1–9. Martin Potthast, Alberto Barrón-Cedeño, Andreas Eiselt, Benno Stein, and Paolo Rosso. 2010a. Overview of the 2nd international competition on plagiarism detection. In Working Notes Papers of the CLEF 2010 Evaluation Labs. Martin Potthast, Benno Stein, Alberto Barrón-Cedeño, and Paolo Rosso. 2010b. An evaluation framework for plagiarism detection. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), Beijing, China, August 23–27, 2010, pages 997–1005. Martin Potthast, Andreas Eiselt, Alberto BarrónCedeño, Benno Stein, and Paolo Rosso. 2011. Overview of the 3rd international competition on plagiarism detection. In Working Notes Papers of the CLEF 2011 Evaluation Labs. Martin Potthast, Tim Gollub, Matthias Hagen, Jan Graßegger, Johannes Kiesel, Maximilian Michel, Arnd Oberländer, Martin Tippmann, Alberto BarrónCedeño, Parth Gupta, Paolo Rosso, and Benno Stein. 2012a. Overview of the 4th international competition on plagiarism detection. In Working Notes Papers of the CLEF 2012 Evaluation Labs. Martin Potthast, Matthias Hagen, Benno Stein, Jan Graßegger, Maximilian Michel, Martin Tippmann, and Clement Welsch. 2012b. ChatNoir: a search engine for the ClueWeb09 corpus. In Proceedings of the 35th International ACM Conference on Research and Development in Information Retrieval (SIGIR 2012), Portland, OR, USA, August 12–16, 2012, page 1004. Martin Potthast. 2011. Technologies for Reusing Text from the Web. Dissertation, Bauhaus-Universität Weimar. Stephen E. Robertson, Hugo Zaragoza, and Michael J. Taylor. 2004. Simple BM25 extension to multiple weighted fields. In Proceedings of the 13th ACM Conference on Information and Knowledge Management (CIKM 2004), Washington, DC, USA, November 8–13, 2004, pages 42–49. Fernanda B. Viégas, Martin Wattenberg, and Kushal Dave. 2004. Studying cooperation and conflict between authors with history flow visualizations. In Proceedings of the 2004 Conference on Human Factors in Computing Systems (CHI 2004), Vienna, Austria, April 24–29, 2004, pages 575–582. 1221
2013
119
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 114–124, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Decentralized Entity-Level Modeling for Coreference Resolution Greg Durrett, David Hall, and Dan Klein Computer Science Division University of California, Berkeley {gdurrett,dlwh,klein}@cs.berkeley.edu Abstract Efficiently incorporating entity-level information is a challenge for coreference resolution systems due to the difficulty of exact inference over partitions. We describe an end-to-end discriminative probabilistic model for coreference that, along with standard pairwise features, enforces structural agreement constraints between specified properties of coreferent mentions. This model can be represented as a factor graph for each document that admits efficient inference via belief propagation. We show that our method can use entity-level information to outperform a basic pairwise system. 1 Introduction The inclusion of entity-level features has been a driving force behind the development of many coreference resolution systems (Luo et al., 2004; Rahman and Ng, 2009; Haghighi and Klein, 2010; Lee et al., 2011). There is no polynomial-time dynamic program for inference in a model with arbitrary entity-level features, so systems that use such features typically rely on making decisions in a pipelined manner and sticking with them, operating greedily in a left-to-right fashion (Rahman and Ng, 2009) or in a multi-pass, sieve-like manner (Raghunathan et al., 2010). However, such systems may be locked into bad coreference decisions and are difficult to directly optimize for standard evaluation metrics. In this work, we present a new structured model of entity-level information designed to allow efficient inference. We use a log-linear model that can be expressed as a factor graph. Pairwise features appear in the model as unary factors, adjacent to nodes representing a choice of antecedent (or none) for each mention. Additional nodes model entity-level properties on a per-mention basis, and structural agreement factors softly drive properties of coreferent mentions to agree with one another. This is a key feature of our model: mentions manage their partial membership in various coreference chains, so that information about entity-level properties is decentralized and propagated across individual mentions, and we never need to explicitly instantiate entities. Exact inference in this factor graph is intractable, but efficient approximate inference can be carried out with belief propagation. Our model is the first discriminatively-trained model that both makes joint decisions over an entire document and models specific entity-level properties, rather than simply enforcing transitivity of pairwise decisions (Finkel and Manning, 2008; Song et al., 2012). We evaluate our system on the dataset from the CoNLL 2011 shared task using three different types of properties: synthetic oracle properties, entity phi features (number, gender, animacy, and NER type), and properties derived from unsupervised clusters targeting semantic type information. In all cases, our transitive model of entity properties equals or outperforms our pairwise system and our reimplementation of a previous entity-level system (Rahman and Ng, 2009). Our final system is competitive with the winner of the CoNLL 2011 shared task (Lee et al., 2011). 2 Example We begin with an example motivating our use of entity-level features. 
Consider the following excerpt concerning two famous auction houses: When looking for [art items], [people] go to [Sotheby’s and Christie’s] because [they]A believe [they]B can get the best price for [them]. The first three mentions are all distinct entities, theyA and theyB refer to people, and them refers to art items. The three pronouns are tricky to resolve 114 automatically because they could at first glance resolve to any of the preceding mentions. We focus in particular on the resolution of theyA and them. In order to correctly resolve theyA to people rather than Sotheby’s and Christie’s, we must take advantage of the fact that theyA appears as the subject of the verb believe, which is much more likely to be attributed to people than to auction houses. Binding principles prevent them from attaching to theyB. But how do we prevent it from choosing as its antecedent the next closest agreeing pronoun, theyA? One way is to exploit the correct coreference decision we have already made, theyA referring to people, since people are not as likely to have a price as art items are. This observation argues for enforcing agreement of entity-level semantic properties during inference, specifically properties relating to permitted semantic roles. Because even these six mentions have hundreds of potential partitions into coreference chains, we cannot search over partitions exhaustively, and therefore we must design our model to be able to use this information while still admitting an efficient inference scheme. 3 Models We will first present our BASIC model (Section 3.1) and describe the features it incorporates (Section 3.2), then explain how to extend it to use transitive features (Sections 3.3 and 3.4). Throughout this section, let x be a variable containing the words in a document along with any relevant precomputed annotation (such as parse information, semantic roles, etc.), and let n denote the number of mentions in a given document. 3.1 BASIC Model Our BASIC model is depicted in Figure 1 in standard factor graph notation. Each mention i has an associated random variable ai taking values in the set {1, . . . , i−1, <new>}; this variable specifies mention i’s selected antecedent or indicates that it begins a new coreference chain. Let a = (a1, ..., an) be the vector of the ai. Note that a set of coreference chains C (the final desired output) can be uniquely determined from a, but a is not uniquely determined by C. We use a log linear model of the conditional distribution P(a|x) as follows: P(a|x) ∝exp n X i=1 wT fA(i, ai, x) ! When looking for [art items], [people] go to [Sotheby's and Christie's] because [they]A believe [they]B can get the best price for [them]. art items 0.15 people 0.4 Sotheby’s and Christie’s 0.4 <new> 0.05 a2 a3 a4 a1 A1 A2 A3 A4 art items 0.05 <new> 0.95 antecedent choices antecedent factors } } Figure 1: Our BASIC coreference model. A decision ai is made independently for each mention about what its antecedent mention should be or whether it should start a new coreference chain. Each unary factor Ai has a log-linear form with features examining mention i, its selected antecedent ai, and the document context x. where fA(i, ai, x) is a feature function that examines the coreference decision ai for mention i with document context x; note that this feature function can include pairwise features based on mention i and the chosen antecedent ai, since information about each mention is contained in x. 
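As a concrete illustration of this factored scoring, the per-mention distribution over antecedent choices can be computed as a softmax over summed feature weights. The sketch below uses entirely made-up feature names and weights, not the system's actual feature strings.

import math
from collections import defaultdict

def conjoined_features(base_feats, mention_type, antecedent_type=None):
    """Emit each base feature twice, as in Section 3.2: once conjoined with the
    current mention's type, once with both mention types (the second version is
    skipped for the <new> hypothesis, which has no antecedent)."""
    feats = []
    for f in base_feats:
        feats.append(f + "&cur=" + mention_type)
        if antecedent_type is not None:
            feats.append(f + "&cur=" + mention_type + "&ant=" + antecedent_type)
    return feats

def antecedent_posteriors(candidate_feats, weights):
    """P(a_i = j | x) for one mention: a softmax over scores w . f_A(i, j, x).
    candidate_feats maps each candidate ('<new>' or an antecedent index) to its
    feature list; weights maps feature names to real values (default 0)."""
    scores = {j: sum(weights[f] for f in feats) for j, feats in candidate_feats.items()}
    z = max(scores.values())
    exp_scores = {j: math.exp(s - z) for j, s in scores.items()}
    total = sum(exp_scores.values())
    return {j: v / total for j, v in exp_scores.items()}

# Toy usage:
w = defaultdict(float, {"head-match&cur=PROPER&ant=PROPER": 2.0,
                        "anaphoric&cur=PROPER": -0.5})
cands = {"<new>": conjoined_features(["non-anaphoric"], "PROPER"),
         0: conjoined_features(["anaphoric", "head-match"], "PROPER", "PROPER")}
print(antecedent_posteriors(cands, w))   # most mass on antecedent 0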
Because the model factors completely over the individual ai, these feature functions fA can be expressed as unary factors Ai (see Figure 1), with Ai(j) ∝exp wT fA(i, j, x)  . Given a setting of w, we can determine ˆa = arg maxa P(a|x) and then deterministically compute C(a), the final set of coreference chains. While the features of this model factor over coreference links, this approach differs from classical pairwise systems such as Bengtson and Roth (2008) or Stoyanov et al. (2010). Because potential antecedents compete with each other and with the non-anaphoric hypothesis, the choice of ai actually represents a joint decision about i−1 pairwise links, as opposed to systems that use a pairwise binary classifier and a separate agglomeration step, which consider one link at a time during learning. This approach is similar to the mentionranking model of Rahman and Ng (2009). 3.2 Pairwise Features We now present the set of features fA used by our unary factors Ai. Each feature examines the an115 tecedent choice ai of the current mention as well as the observed information x in the document. For each of the features we present, two conjoined versions are included: one with an indicator of the type of the current mention being resolved, and one with an indicator of the types of the current and antecedent mentions. Mention types are either NOMINAL, PROPER, or, if the mention is pronominal, a canonicalized version of the pronoun abstracting away case.1 Several features, especially those based on the precise constructs (apposition, etc.) and those incorporating phi feature information, are computed using the machinery in Lee et al. (2011). Other features were inspired by Song et al. (2012) and Rahman and Ng (2009). Anaphoricity features: Indicator of anaphoricity, indicator on definiteness. Configurational features: Indicator on distance in mentions (capped at 10), indicator on distance in sentences (capped at 10), does the antecedent c-command the current mention, are the two mentions in a subject/object construction, are the mentions nested, are the mentions in deterministic appositive/role appositive/predicate nominative/relative pronoun constructions. Match features: Is one mention an acronym of the other, head match, head contained (each way), string match, string contained (each way), relaxed head match features from Lee et al. (2011). Agreement features: Gender, number, animacy, and NER type of the current mention and the antecedent (separately and conjoined). Discourse features: Speaker match conjoined with an indicator of whether the document is an article or conversation. Because we use conjunctions of these base features together with the antecedent and mention type, our system can capture many relationships that previous systems hand-coded, especially regarding pronouns. For example, our system has access to features such as “it is non-anaphoric”, “it has as its antecedent a geopolitical entity”, or “I has as its antecedent I with the same speaker.” 1While this canonicalization could theoretically impair our ability to resolve, for example, reflexive pronouns, conjoining features with raw pronoun strings does not improve performance. We experimented with synonymy and hypernymy features from WordNet (Miller, 1995), but these did not empirically improve performance. 3.3 TRANSITIVE Model The BASIC model can capture many relationships between pairs of mentions, but cannot necessarily capture entity-level properties like those discussed in Section 2. 
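The deterministic mapping from an antecedent vector to chains, C(a), which both training and decoding rely on, is simple enough to state as a short union-find pass. The sketch below uses 0-based mention indices and None for <new>, which differs from the paper's notation.

def chains_from_antecedents(a):
    """Recover the coreference chains C(a) from an antecedent vector a, where
    a[i] is the 0-based index of mention i's antecedent or None for <new>."""
    parent = list(range(len(a)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, ai in enumerate(a):
        if ai is not None:                 # link mention i to its chosen antecedent
            parent[find(i)] = find(ai)

    chains = {}
    for i in range(len(a)):
        chains.setdefault(find(i), []).append(i)
    return list(chains.values())

# Different antecedent vectors can yield the same partition:
print(chains_from_antecedents([None, 0, 0, None]))   # [[0, 1, 2], [3]]
print(chains_from_antecedents([None, 0, 1, None]))   # [[0, 1, 2], [3]]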
We could of course model entities directly (Luo et al., 2004; Rahman and Ng, 2009), saying that each mention refers to some prior entity rather than to some prior mention. However, inference in this model would require reasoning about all possible partitions of mentions, which is computationally infeasible without resorting to severe approximations like a left-to-right inference method (Rahman and Ng, 2009). Instead, we would like to try to preserve the tractability of the BASIC model while still being able to exploit entity-level information. To do so, we will allow each mention to maintain its own distributions over values for a number of properties; these properties could include gender, namedentity type, or semantic class. Then, we will require each anaphoric mention to agree with its antecedent on the value of each of these properties. Our TRANSITIVE model which implements this scheme is shown in Figure 2. Each mention i has been augmented with a single property node pi ∈{1, ..., k}. The unary Pi factors encode prior knowledge about the setting of each pi; these factors may be hard (I will not refer to a plural entity), soft (such as a distribution over named entity types output by an NER tagger), or practically uniform (e.g. the last name Smith does not specify a particular gender). To enforce agreement of a particular property, we require a mention to have the same property value as its antecedent. That is, for mentions i and j, if ai = j, we want to ensure that pi and pj agree. We can achieve this with the following set of structural equality factors: Ei−j(ai, pi, pj) = 1 −I[ai = j ∧pi ̸= pj] In words, this factor is zero if both ai = j and pi disagrees with pj. These equality factors essentially provide a mechanism by which these priors Pi can influence the coreference decisions: if, for example, the factors Pi and Pj disagree very strongly, choosing ai ̸= j will be preferred in order to avoid forcing one of pi or pj to take an undesirable value. Moreover, note that although ai 116 E4-3 a2 a4 p4 p3 p2 E4-2 A2 A3 A4 P2 P3 P4 antecedent choices antecedent factors property factors properties equality factors a3 } } } } } people Sotheby's and Christie's they Figure 2: The factor graph for our TRANSITIVE coreference model. Each node ai now has a property pi, which is informed by its own unary factor Pi. In our example, a4 strongly indicates that mentions 2 and 4 are coreferent; the factor E4−2 then enforces equality between p2 and p4, while the factor E4−3 has no effect. only indicates a single antecedent, the transitive nature of the E factors forces pi to agree with the p nodes of all other mentions likely to be in the same entity. 3.4 Property Projection So far, our model as specified ensures agreement of our entity-level properties, but strictly enforcing agreement may not always be correct. Suppose that we are using named entity type as an entitylevel property. Organizations and geo-political entities are two frequently confused and ambiguous tags, and in the gold-standard coreference chains it may be the case that a single chain contains instances of both. We might wish to learn that organizations and geo-political entities are “compatible” in the sense that we should forgive entities for containing both, but without losing the ability to reject a chain containing both organizations and people, for example. To address these effects, we expand our model as indicated in Figure 3. As before, we have a set of properties pi and agreement factors Eij. 
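A sketch of how such an agreement factor behaves is given below; the factor value follows the definition of E_{i-j} above, while the surrounding helper and its inputs are our own illustrative construction.

def equality_factor(a_i, j, p_i, p_j):
    """E_{i-j}(a_i, p_i, p_j) = 1 - I[a_i = j and p_i != p_j]: the configuration
    gets weight 0 only when mention i selects j as its antecedent while their
    property values disagree."""
    return 0.0 if (a_i == j and p_i != p_j) else 1.0

def joint_weight(a_i, prop_i, antecedent_props):
    """Unnormalized weight contributed by all equality factors that touch
    mention i, for one assignment of its antecedent a_i and property prop_i.
    antecedent_props maps each candidate antecedent j to an assumed value p_j."""
    w = 1.0
    for j, p_j in antecedent_props.items():
        w *= equality_factor(a_i, j, prop_i, p_j)
    return w

# If mention 4 selects mention 2, their properties must match; mention 3's
# property is left unconstrained by that choice:
props = {2: "PERSON", 3: "ORG"}
print(joint_weight(a_i=2, prop_i="PERSON", antecedent_props=props))  # 1.0
print(joint_weight(a_i=2, prop_i="ORG", antecedent_props=props))     # 0.0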
On top of that, we introduce the notion of raw property values ri ∈{1, ..., k} together with priors in the form of the Ri factors. The ri and pi could in principle have different domains, but for this work we take them to have the same domain. The Pi factors now have a new structure: they now represent a featurized projection of the ri onto the pi, which can now be thought of as “coreferencep4 p3 p2 r4 r3 r2 P2 P3 P4 R2 R3 R4 raw property factors raw properties projection factors projected properties } } } } a2 a4 A2 A3 A4 a3 E3-1 E4-1 Figure 3: The complete factor graph for our TRANSITIVE coreference model. Compared to Figure 2, the Ri contain the raw cluster posteriors, and the Pi factors now project raw cluster values ri into a set of “coreference-adapted” clusters pi that are used as before. This projection allows mentions with different but compatible raw property values to coexist in the same coreference chain. adapted” properties. The Pi factors are defined by Pi(pi, ri) ∝exp(wT fP (pi, ri)), where fP is a feature vector over the projection of ri onto pi. While there are many possible choices of fP , we choose it to be an indicator of the values of pi and ri, so that we learn a fully-parameterized projection matrix.2 The Ri are constant factors, and may come from an upstream model or some other source depending on the property being modeled. Our description thus far has assumed that we are modeling only one type of property. In fact, we can use multiple properties for each mention by duplicating the r and p nodes and the R, P, and E factors across each desired property. We index each of these by l ∈{1, . . . , m} for each of m properties. The final log-linear model is given by the following formula: P(a|x) ∝ X p,r    Y i,j,l El,i−j(ai, pli, plj)    Y i,l Rli(rli)   exp wT X i fA(i, ai, x) + X l fP (pli, rli) !!# where i and j range over mentions, l ranges over 2Initialized to zero (or small values), this matrix actually causes the transitive machinery to have no effect, since all posteriors over the pi are flat and completely uninformative. Therefore, we regularize the weights of the indicators of pi = ri towards 1 and all other features towards 0 to give each raw cluster a preference for a distinct projected cluster. 117 each of m properties, and the outer sum indicates marginalization over all p and r variables. 4 Learning Now that we have defined our model, we must decide how to train its weights w. The first issue to address is one of the supervision provided. Our model traffics in sets of labels a which are more specified than gold coreference chains C, which give cluster membership for each mention but not antecedence. Let A(C) be the set of labelings a that are consistent with a set of coreference chains C. For example, if C = {{1, 2, 3}, {4}}, then (<new>, 1, 2, <new>) ∈ A(C) and (<new>, 1, 1, <new>) ∈A(C) but (<new>, 1, <new>, 3) /∈A(C), since this implies the chains C = {{1, 2}, {3, 4}} The most natural objective is a variant of standard conditional log-likelihood that treats the choice of a for the specified C as a latent variable to be marginalized out: ℓ(w) = t X i=1 log   X a∈A(Ci) P(a|xi)   (1) where (xi, Ci) is the ith labeled training example. This optimizes for the 0-1 loss; however, we are much more interested in optimizing with respect to a coreference-specific loss function. 
To this end, we will use softmax-margin (Gimpel and Smith, 2010), which augments the probability of each example with a term proportional to its loss, pushing the model to assign less mass to highly incorrect examples. We modify Equation 1 to use a new probability distribution P' instead of P, where

P'(a | x_i) \propto P(a | x_i) \exp(l(a, C))

and l(a, C) is a loss function. In order to perform inference efficiently, l(a, C) must decompose linearly across mentions: l(a, C) = \sum_{i=1}^{n} l(a_i, C). Commonly used coreference metrics such as MUC (Vilain et al., 1995) and B3 (Bagga and Baldwin, 1998) do not have this property, so we instead make use of a parameterized loss function that does, and fit its parameters to give good performance. Specifically, we take

l(a, C) = \sum_{i=1}^{n} \big[ c_1 I(K_1(a_i, C)) + c_2 I(K_2(a_i, C)) + c_3 I(K_3(a_i, C)) \big]

where c_1, c_2, and c_3 are real-valued weights, K_1 denotes the event that a_i is falsely anaphoric when it should be non-anaphoric, K_2 denotes the event that a_i is falsely non-anaphoric when it should be anaphoric, and K_3 denotes the event that a_i is correctly determined to be anaphoric but selects the wrong antecedent. These can be computed based on only a_i and C. By setting c_1 low and c_2 high relative to c_3, we can force the system to be less conservative about making anaphoricity decisions and achieve a better balance with the final coreference metrics. Finally, we incorporate L1 regularization, giving us our final objective:

\ell(w) = \sum_{i=1}^{t} \log \Big( \sum_{a \in A(C_i)} P'(a | x_i) \Big) + \lambda \lVert w \rVert_1

We optimize this objective using AdaGrad (Duchi et al., 2011); we found this to be faster and to give higher performance than L-BFGS with L2 regularization (Liu and Nocedal, 1989). Note that because of the marginalization over A(C_i), even the objective for the BASIC model is not convex.

5 Inference

Inference in the BASIC model is straightforward. Given a set of weights w, we can predict

\hat{a} = \arg\max_a P(a | x).

We then report the corresponding chains C(a) as the system output.3 For learning, the gradient takes the standard form of the gradient of a log-linear model: a difference of expected feature counts under the gold annotation and under no annotation. This requires computing marginals P'(a_i | x) for each mention i, but because the model already factors this way, this step is easy.

3 One could use ILP-based decoding in the style of Finkel and Manning (2008) and Song et al. (2012) to attempt to explicitly find the optimal C with the choice of a marginalized out, but we did not explore this option.

The TRANSITIVE model is more complex. Exact inference is intractable due to the E factors that couple all of the a_i by way of the p_i nodes. However, we can compute approximate marginals for the a_i, p_i, and r_i using belief propagation. BP has been effectively used on other NLP tasks (Smith and Eisner, 2008; Burkett and Klein, 2012), and is effective in cases such as this where the model is largely driven by non-loopy factors (here, the A_i). From marginals over each node, we can compute the necessary gradient and decode as before:

\hat{a} = \arg\max_a \hat{P}(a | x).

This corresponds to minimum-risk decoding with respect to the Hamming loss over antecedence predictions.

Pruning. The TRANSITIVE model requires instantiating a factor for each potential setting of each a_i. This factor graph grows quadratically in the size of the document, and even approximate inference becomes slow when a document contains over 200 mentions. Therefore, we use our BASIC model to prune antecedent choices for each a_i in order to reduce the size of the factor graph that we must instantiate.
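A sketch of this coarse-to-fine step, using the mention-distance cap and odds-ratio cutoff spelled out next, might look as follows; the data structures and the use of probabilities rather than log scores are our own simplifications.

import math

def prune_candidates(basic_posteriors, position, max_distance=100, gamma=-2.0):
    """Keep only the values of a_i that survive the mention-distance cap and the
    odds-ratio cutoff against the best value under the BASIC model.

    basic_posteriors maps each candidate ('<new>' or an antecedent index) to
    P_BASIC(a_i = j | x) for the mention at index `position`."""
    best = max(basic_posteriors.values())
    kept = {}
    for j, p in basic_posteriors.items():
        if j != "<new>" and position - j > max_distance:
            continue                          # antecedent too far away
        if math.log(p / best) < gamma:
            continue                          # odds ratio against the best value too low
        kept[j] = p
    return kept

# Toy usage: the distant and the low-probability candidates are pruned.
posts = {"<new>": 0.10, 120: 0.55, 230: 0.30, 5: 0.05}
print(prune_candidates(posts, position=240))   # {'<new>': 0.1, 230: 0.3}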
Specifically, we prune links between pairs of mentions that are of mention distance more than 100, as well as values for ai that fall below a particular odds ratio threshold with respect to the best setting of that ai in the BASIC model; that is, those for which log  PBASIC (ai|x) maxj PBASIC (ai = j|x)  is below a cutoff γ. 6 Related Work Our BASIC model is a mention-ranking approach resembling models used by Denis and Baldridge (2008) and Rahman and Ng (2009), though it is trained using a novel parameterized loss function. It is also similar to the MLN-JOINT(BF) model of Song et al. (2012), but we enforce the singleparent constraint at a deeper structural level, allowing us to treat non-anaphoricity symmetrically with coreference as in Denis and Baldridge (2007) and Stoyanov and Eisner (2012). The model of Fernandes et al. (2012) also uses the single-parent constraint structurally, but with learning via latent perceptron and ILP-based one-best decoding rather than logistic regression and BP-based marginal computation. Our TRANSITIVE model is novel; while McCallum and Wellner (2004) proposed the idea of using attributes for mentions, they do not actually implement a model that does so. Other systems include entity-level information via handwritten rules (Raghunathan et al., 2010), induced rules (Yang et al., 2008), or features with learned weights (Luo et al., 2004; Rahman and Ng, 2011), but all of these systems freeze past coreference decisions in order to compute their entities. Most similar to our entity-level approach is the system of Haghighi and Klein (2010), which also uses approximate global inference; however, theirs is an unsupervised, generative system and they attempt to directly model multinomials over words in each mention. Their system could be extended to handle property information like we do, but our system has many other advantages, such as freedom from a pre-specified list of entity types, the ability to use multiple input clusterings, and discriminative projection of clusters. 7 Experiments We use the datasets, experimental setup, and scoring program from the CoNLL 2011 shared task (Pradhan et al., 2011), based on the OntoNotes corpus (Hovy et al., 2006). We use the standard automatic parses and NER tags for each document. Our mentions are those output by the system of Lee et al. (2011); we also use their postprocessing to remove appositives, predicate nominatives, and singletons before evaluation. For each experiment, we report MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), and CEAFe (Luo, 2005), as well as their average. Parameter settings. We take the regularization constant λ = 0.001 and the parameters of our surrogate loss (c1, c2, c3) = (0.15, 2.5, 1) for all models.4 All models are trained for 20 iterations. We take the pruning threshold γ = −2. 7.1 Systems Besides our BASIC and TRANSITIVE systems, we evaluate a strictly pairwise system that incorporates property information by way of indicator features on the current mention’s most likely property value and the proposed antecedent’s most likely property value. We call this system PAIRPROPERTY; it is simply the BASIC system with an expanded feature set. Furthermore, we compare against a LEFTTORIGHT entity-level system like that of Rahman and Ng (2009).5 Decoding now operates in a sequential fashion, with BASIC features computed as before and entity features computed for each mention based on the coreference decisions made thus far. 
Following Rahman and Ng (2009), features for each property indicate whether the cur4Additional tuning of these hyper parameters did not significantly improve any of the models under any of the experimental conditions. 5Unfortunately, their publicly-available system is closedsource and performs poorly on the CoNLL shared task dataset, so direct comparison is difficult. 119 rent mention agrees with no mentions in the antecedent cluster, at least one mention, over half of the mentions, or all of the mentions; antecedent clusters of size 1 or 2 fire special-cased features. These additional features beyond those in Rahman and Ng (2009) were helpful, but more involved conjunction schemes and fine-grained features were not. During training, entity features of both the gold and the prediction are computed using the Viterbi clustering of preceding mentions under the current model parameters.6 All systems are run in a two-pass manner: first, the BASIC model is run, then antecedent choices are pruned, then our second-round model is trained from scratch on the pruned data.7 7.2 Noisy Oracle Features We first evaluate our model’s ability to exploit synthetic entity-level properties. For this experiment, mention properties are derived from corrupted oracle information about the true underlying coreference cluster. Each coreference cluster is assumed to have one underlying value for each of m coreference properties, each taking values over a domain D. Mentions then sample distributions over D from a Dirichlet distribution peaked around the true underlying value.8 These posteriors are taken as the Ri for the TRANSITIVE model. We choose this setup to reflect two important properties of entity-level information: first, that it may come from a variety of disparate sources, and second, that it may be based on the determinations of upstream models which produce posteriors naturally. A strength of our model is that it can accept such posteriors as input, naturally making use of this information in a model-based way. Table 1 shows development results averaged across ten train-test splits with m = 3 properties, each taking one of |D| = 5 values. We emphasize that these parameter settings give fairly weak oracle information: a document may have hundreds of clusters, so even in the absence of noise these oracle properties do not have high dis6Using gold entities for training as in Rahman and Ng (2009) resulted in a lower-performing system. 7We even do this for the BASIC model, since we found that performance of the pruned and retrained model was generally higher. 8Specifically, the distribution used is a Dirichlet with α = 3.5 for the true underlying cluster and α = 1 for other values, chosen so that 25% of samples from the distribution did not have the correct mode. Though these parameters affect the quality of the oracle information, varying them did not change the relative performance of the different models. NOISY ORACLE MUC B3 CEAFe Avg. BASIC 61.96 70.66 47.30 59.97 PAIRPROPERTY 66.31 72.68 49.08 62.69 LEFTTORIGHT 66.49 73.14 49.46 63.03 TRANSITIVE 67.37 74.05 49.68 63.70 Table 1: CoNLL metric scores for our four different systems incorporating noisy oracle data. This information helps substantially in all cases. Both entity-level models outperform the PAIRPROPERTY model, but we observe that the TRANSITIVE model is more effective than the LEFTTORIGHT model at using this information. criminating power. 
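For reference, a sketch of the synthetic property generation described above (Dirichlet posteriors with alpha 3.5 on the true cluster value and 1 elsewhere, as in footnote 8) could look like this; the function and its interface are our own, not the paper's code.

import numpy as np

rng = np.random.default_rng(0)

def noisy_oracle_posterior(true_value, domain_size=5, alpha_true=3.5, alpha_other=1.0):
    """Sample one mention's distribution over property values from a Dirichlet
    peaked at its cluster's true value; the returned vector plays the role of
    the factor R_i."""
    alpha = np.full(domain_size, alpha_other)
    alpha[true_value] = alpha_true
    return rng.dirichlet(alpha)

# With these parameters, roughly a quarter of the sampled distributions do not
# have the correct mode:
samples = [noisy_oracle_posterior(true_value=2) for _ in range(10000)]
print(float(np.mean([s.argmax() != 2 for s in samples])))   # about 0.25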
Still, we see that all models are able to benefit from incorporating this information; however, our TRANSITIVE model outperforms both the PAIRPROPERTY model and the LEFTTORIGHT model. There are a few reasons for this: first, our model is able to directly use soft posteriors, so it is able to exploit the fact that more peaked samples from the Dirichlet are more likely to be correct. Moreover, our model can propagate information backwards in a document as well as forwards, so the effects of noise can be more easily mitigated. By contrast, in the LEFTTORIGHT model, if the first or second mention in a cluster has the wrong property value, features indicating high levels of property agreement will not fire on the next few mentions in those clusters. 7.3 Phi Features As we have seen, our TRANSITIVE model can exploit high-quality entity-level features. How does it perform using real features that have been proposed for entity-level coreference? Here, we use hard phi feature determinations extracted from the system of Lee et al. (2011). Named-entity type and animacy are both computed based on the output of a named-entity tagger, while number and gender use the dataset of Bergsma and Lin (2006). Once this information is determined, the PAIRPROPERTY and LEFTTORIGHT systems can compute features over it directly. In the TRANSITIVE model, each of the Ri factors places 3 4 of its mass on the determined label and distributes the remainder uniformly among the possible options. Table 2 shows results when adding entity-level phi features on top of our BASIC pairwise system (which already contains pairwise features) and on top of an ablated BASIC system without pairwise 120 PHI FEATURES MUC B3 CEAFe Avg. BASIC 61.96 70.66 47.30 59.97 LEFTTORIGHT 61.34 70.41 47.64 59.80 TRANSITIVE 62.66 70.92 46.88 60.16 PHI FEATURES (ABLATED BASIC) BASIC-PHI 59.45 69.21 46.02 58.23 PAIRPROPERTY 61.88 70.66 47.14 59.90 LEFTTORIGHT 61.42 70.53 47.49 59.81 TRANSITIVE 62.23 70.78 46.74 59.92 Table 2: CoNLL metric scores for our systems incorporating phi features. Our standard BASIC system already includes phi features, so no results are reported for PAIRPROPERTY. Here, our TRANSITIVE system does not give substantial improvement on the averaged metric. Over a baseline which does not include phi features, all systems are able to incorporate them comparably. phi features. Our entity-level systems successfully captures phi features when they are not present in the baseline, but there is only slight benefit over pairwise incorporation, a result which has been noted previously (Luo et al., 2004). 7.4 Clustering Features Finally, we consider mention properties derived from unsupervised clusterings; these properties are designed to target semantic properties of nominals that should behave more like the oracle features than the phi features do. We consider clusterings that take as input pairs (n, r) of a noun head n and a string r which contains the semantic role of n (or some approximation thereof) conjoined with its governor. Two different algorithms are used to cluster these pairs: a NAIVEBAYES model, where c generates n and r, and a CONDITIONAL model, where c is generated conditioned on r and then n is generated from c. Parameters for each can be learned with the expectation maximization (EM) algorithm (Dempster et al., 1977), with symmetry broken by a small amount of random noise at initialization. 
Similar models have been used to learn subcategorization information (Rooth et al., 1999) or properties of verb argument slots (Yao et al., 2011). We choose this kind of clustering for its relative simplicity and because it allows pronouns to have more informed properties (from their verbal context) than would be possible using a model that makes type-level decisions about nominals only. Though these specific cluster features are novel to coreference, previous work has used similar CLUSTERS MUC B3 CEAFe Avg. BASIC 61.96 70.66 47.30 59.97 PAIRPROPERTY 62.88 70.71 47.45 60.35 LEFTTORIGHT 61.98 70.19 45.77 59.31 TRANSITIVE 63.34 70.89 46.88 60.37 Table 3: CoNLL metric scores for our systems incorporating clustering features. These features are equally effectively incorporated by our PAIRPROPERTY system and our TRANSITIVE system. government officials court authorities ARG0:said ARG0:say ARG0:found ARG0:announced prices shares index rates ARG1:rose ARG1:fell ARG1:cut ARG1:closed way law agreement plan ARG1:signed ARG1:announced ARG1:set ARG1:approved attack problems attacks charges ARG1:cause ARG2:following ARG1:reported ARG1:filed ... ... ... ... ... ... ... ... ... Figure 4: Examples of clusters produced by the NAIVEBAYES model on SRL-tagged data with pronouns discarded. types of fine-grained semantic class information (Hendrickx and Daelemans, 2007; Ng, 2007; Rahman and Ng, 2010). Other approaches incorporate information from other sources (Ponzetto and Strube, 2006) or compute heuristic scores for realvalued features based on a large corpus or the web (Dagan and Itai, 1990; Yang et al., 2005; Bansal and Klein, 2012). We use four different clusterings in our experiments, each with twenty clusters: dependency-parse-derived NAIVEBAYES clusters, semantic-role-derived CONDITIONAL clusters, SRL-derived NAIVEBAYES clusters generating a NOVERB token when r cannot be determined, and SRL-derived NAIVEBAYES clusters with all pronoun tuples discarded. Examples of the latter clusters are shown in Figure 4. Each clustering is learned for 30 iterations of EM over English Gigaword (Graff et al., 2007), parsed with the Berkeley Parser (Petrov et al., 2006) and with SRL determined by Senna (Collobert et al., 2011). Table 3 shows results of modeling these cluster properties. As in the case of oracle features, the PAIRPROPERTY and LEFTTORIGHT systems use the modes of the cluster posteriors, and the TRANSITIVE system uses the posteriors directly as the Ri. We see comparable performance from incorporating features in both an entity-level framework and a pairwise framework, though the TRANSI121 MUC B3 CEAFe Avg. Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. 
F1 F1 BASIC 69.99 55.59 61.96 80.96 62.69 70.66 41.37 55.21 47.30 59.97 STANFORD 61.49 59.59 60.49 74.60 68.25 71.28 47.57 49.45 48.49 60.10 NOISY ORACLE PAIRPROPERTY 76.49 58.53 66.31 84.98 63.48 72.68 41.84 59.36 49.08 62.69 LEFTTORIGHT 76.92 58.55 66.49 85.68 63.81 73.14 42.07 60.01 49.46 63.03 TRANSITIVE 76.48 60.20 *67.37 84.84 65.69 *74.05 42.89 59.01 *49.68 63.70 PHI FEATURES LEFTTORIGHT 69.77 54.73 61.34 81.40 62.04 70.41 41.49 55.92 47.64 59.80 TRANSITIVE 70.27 56.54 *62.66 79.81 63.82 *70.92 41.17 54.44 46.88 60.16 PHI FEATURES (ABLATED BASIC) BASIC-PHI 67.04 53.41 59.45 78.93 61.63 69.21 40.40 53.46 46.02 58.23 PAIRPROPERTY 70.24 55.31 61.88 81.10 62.60 70.66 41.04 55.38 47.14 59.90 LEFTTORIGHT 69.94 54.75 61.42 81.38 62.23 70.53 41.29 55.87 47.49 59.81 TRANSITIVE 70.06 55.98 *62.23 79.92 63.52 70.78 40.90 54.52 46.74 59.92 CLUSTERS PAIRPROPERTY 71.77 55.95 62.88 81.76 62.30 70.71 40.98 56.35 47.45 60.35 LEFTTORIGHT 69.75 54.82 61.39 81.48 62.29 70.60 41.62 55.89 47.71 59.90 TRANSITIVE 71.54 56.83 *63.34 80.55 63.31 *70.89 40.77 55.14 46.88 60.37 Table 4: CoNLL metric scores averaged across ten different splits of the training set for each experiment. We include precision, recall, and F1 for each metric for completeness. Starred F1 values on the individual metrics for the TRANSITIVE system are significantly better than all other results in the same block at the p = 0.01 level according to a bootstrap resampling test. MUC B3 CEAFe Avg. Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 F1 BASIC 68.84 56.08 61.81 77.60 61.40 68.56 38.25 50.57 43.55 57.97 PAIRPROPERTY 70.90 56.26 62.73 78.95 60.79 68.69 37.69 51.92 43.67 58.37 LEFTTORIGHT 68.84 55.56 61.49 78.64 61.03 68.72 38.97 51.74 44.46 58.22 TRANSITIVE 70.62 58.06 *63.73 76.93 62.24 68.81 38.00 50.40 43.33 58.62 STANFORD 60.91 62.13 61.51 70.61 67.75 69.15 45.79 44.55 45.16 58.61 Table 5: CoNLL metric scores for our best systems (including clustering features) on the CoNLL blind test set, reported in the same manner as Table 4. TIVE system appears to be more effective than the LEFTTORIGHT system. 7.5 Final Results Table 4 shows expanded results on our development sets for the different types of entity-level information we considered. We also show in in Table 5 the results of our system on the CoNLL test set, and see that it performs comparably to the Stanford coreference system (Lee et al., 2011). Here, our TRANSITIVE system provides modest improvements over all our other systems. Based on Table 4, our TRANSITIVE system appears to do better on MUC and B3 than on CEAFe. However, we found no simple way to change the relative performance characteristics of our various systems; notably, modifying the parameters of the loss function mentioned in Section 4 or changing it entirely did not trade off these three metrics but merely increased or decreased them in lockstep. Therefore, the TRANSITIVE system actually substantially improves over our baselines and is not merely trading off metrics in a way that could be easily reproduced through other means. 8 Conclusion In this work, we presented a novel coreference architecture that can both take advantage of standard pairwise features as well as use transitivity to enforce coherence of decentralized entity-level properties within coreference clusters. 
Our transitive system is more effective at using properties than a pairwise system and a previous entity-level system, and it achieves performance comparable to that of the Stanford coreference resolution system, the winner of the CoNLL 2011 shared task. Acknowledgments This work was partially supported by BBN under DARPA contract HR0011-12-C-0014, by an NSF fellowship for the first author, and by a Google fellowship for the second. Thanks to the anonymous reviewers for their insightful comments. 122 References Amit Bagga and Breck Baldwin. 1998. Algorithms for Scoring Coreference Chains. In Proceedings of the Conference on Language Resources and Evaluation Workshop on Linguistics Coreference. Mohit Bansal and Dan Klein. 2012. Coreference Semantics from Web Features. In Proceedings of the Association for Computational Linguistics. Eric Bengtson and Dan Roth. 2008. Understanding the Value of Features for Coreference Resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Shane Bergsma and Dekang Lin. 2006. Bootstrapping Path-Based Pronoun Resolution. In Proceedings of the Conference on Computational Linguistics and the Association for Computational Linguistics. David Burkett and Dan Klein. 2012. Fast Inference in Phrase Extraction Models with Belief Propagation. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493–2537, November. Ido Dagan and Alon Itai. 1990. Automatic Processing of Large Corpora for the Resolution of Anaphora References. In Proceedings of the Conference on Computational Linguistics - Volume 3. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Pascal Denis and Jason Baldridge. 2007. Joint Determination of Anaphoricity and Coreference Resolution using Integer Programming. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Pascal Denis and Jason Baldridge. 2008. Specialized Models and Ranking for Coreference Resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121–2159, July. Eraldo Rezende Fernandes, C´ıcero Nogueira dos Santos, and Ruy Luiz Milidi´u. 2012. Latent Structure Perceptron with Feature Induction for Unrestricted Coreference Resolution. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Proceedings and Conference on Computational Natural Language Learning - Shared Task. Jenny Rose Finkel and Christopher D. Manning. 2008. Enforcing Transitivity in Coreference Resolution. In Proceedings of the Association for Computational Linguistics: Short Papers. Kevin Gimpel and Noah A. Smith. 2010. SoftmaxMargin CRFs: Training Log-Linear Models with Cost Functions. In Proceedings of the North American Chapter for the Association for Computational Linguistics. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2007. English Gigaword Third Edition. Linguistic Data Consortium, Catalog Number LDC2007T07. Aria Haghighi and Dan Klein. 2010. 
Coreference Resolution in a Modular, Entity-Centered Model. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Iris Hendrickx and Walter Daelemans, 2007. Adding Semantic Information: Unsupervised Clusters for Coreference Resolution. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% solution. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Short Papers. Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford’s Multi-Pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task. In Proceedings of the Conference on Computational Natural Language Learning: Shared Task. Dong C. Liu and Jorge Nocedal. 1989. On the Limited Memory BFGS Method for Large Scale Optimization. Mathematical Programming, 45(3):503–528, December. Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A Mention-Synchronous Coreference Resolution Algorithm Based on the Bell Tree. In Proceedings of the Association for Computational Linguistics. Xiaoqiang Luo. 2005. On Coreference Resolution Performance Metrics. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Andrew McCallum and Ben Wellner. 2004. Conditional Models of Identity Uncertainty with Application to Noun Coreference. In Proceedings of Advances in Neural Information Processing Systems. George A. Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM, 38:39–41. Vincent Ng. 2007. Semantic class induction and coreference resolution. In Proceedings of the Association for Computational Linguistics. 123 Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of the Conference on Computational Linguistics and the Association for Computational Linguistics. Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting Semantic Role Labeling, WordNet and Wikipedia for Coreference Resolution. In Proceedings of the North American Chapter of the Association of Computational Linguistics. Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 Shared Task: Modeling Unrestricted Coreference in OntoNotes. In Proceedings of the Conference on Computational Natural Language Learning: Shared Task. Karthik Raghunathan, Heeyoung Lee, Sudarshan Rangarajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A MultiPass Sieve for Coreference Resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Altaf Rahman and Vincent Ng. 2009. Supervised Models for Coreference Resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Altaf Rahman and Vincent Ng. 2010. Inducing FineGrained Semantic Classes via Hierarchical and Collective Classification. In Proceedings of the International Conference on Computational Linguistics. Altaf Rahman and Vincent Ng. 2011. Narrowing the Modeling Gap: A Cluster-Ranking Approach to Coreference Resolution. Journal of Artificial Intelligence Research, 40(1):469–521, January. Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a Semantically Annotated Lexicon via EM-Based Clustering. In Proceedings of the Association for Computational Linguistics. David A. Smith and Jason Eisner. 
2008. Dependency Parsing by Belief Propagation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Yang Song, Jing Jiang, Wayne Xin Zhao, Sujian Li, and Houfeng Wang. 2012. Joint Learning for Coreference Resolution with Markov Logic. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Veselin Stoyanov and Jason Eisner. 2012. Easy-first Coreference Resolution. In Proceedings of the International Conference on Computational Linguistics. Veselin Stoyanov, Claire Cardie, Nathan Gilbert, Ellen Riloff, David Buttler, and David Hysom. 2010. Coreference Resolution with Reconcile. In Proceedings of the Association for Computational Linguistics: Short Papers. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A ModelTheoretic Coreference Scoring Scheme. In Proceedings of the Conference on Message Understanding. Xiaofeng Yang, Jian Su, and Chew Lim Tan. 2005. Improving Pronoun Resolution Using Statistics-Based Semantic Compatibility Information. In Proceedings of the Association for Computational Linguistics. Xiaofeng Yang, Jian Su, Jun Lang, Chew L. Tan, Ting Liu, and Sheng Li. 2008. An Entity-Mention Model for Coreference Resolution with Inductive Logic Programming. In Proceedings of the Association for Computational Linguistics. Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured Relation Discovery Using Generative Models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. 124
2013
12
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1222–1232, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics SPred: Large-scale Harvesting of Semantic Predicates Tiziano Flati and Roberto Navigli Dipartimento di Informatica Sapienza Universit`a di Roma {flati,navigli}@di.uniroma1.it Abstract We present SPred, a novel method for the creation of large repositories of semantic predicates. We start from existing collocations to form lexical predicates (e.g., break ∗) and learn the semantic classes that best fit the ∗argument. To do this, we extract all the occurrences in Wikipedia which match the predicate and abstract its arguments to general semantic classes (e.g., break BODY PART, break AGREEMENT, etc.). Our experiments show that we are able to create a large collection of semantic predicates from the Oxford Advanced Learner’s Dictionary with high precision and recall, and perform well against the most similar approach. 1 Introduction Acquiring semantic knowledge from text automatically is a long-standing issue in Computational Linguistics and Artificial Intelligence. Over the last decade or so the enormous abundance of information and data that has become available has made it possible to extract huge amounts of patterns and named entities (Etzioni et al., 2005), semantic lexicons for categories of interest (Thelen and Riloff, 2002; Igo and Riloff, 2009), large domain glossaries (De Benedictis et al., 2013) and lists of concepts (Katz et al., 2003). Recently, the availability of Wikipedia and other collaborative resources has considerably boosted research on several aspects of knowledge acquisition (Hovy et al., 2013), leading to the creation of several large-scale knowledge resources, such as DBPedia (Bizer et al., 2009), BabelNet (Navigli and Ponzetto, 2012), YAGO (Hoffart et al., 2013), MENTA (de Melo and Weikum, 2010), to name but a few. This wealth of acquired knowledge is known to have a positive impact on important fields such as Information Retrieval (Chu-Carroll and Prager, 2007), Information Extraction (Krause et al., 2012), Question Answering (Ferrucci et al., 2010) and Textual Entailment (Berant et al., 2012; Stern and Dagan, 2012). Not only are these knowledge resources obtained by acquiring concepts and named entities, but they also provide semantic relations between them. These relations are extracted from unstructured or semi-structured text using ontology learning from scratch (Velardi et al., 2013) and Open Information Extraction techniques (Etzioni et al., 2005; Yates et al., 2007; Wu and Weld, 2010; Fader et al., 2011; Moro and Navigli, 2013) which mainly stem from seminal work on is-a relation acquisition (Hearst, 1992) and subsequent developments (Girju et al., 2003; Pasca, 2004; Snow et al., 2004, among others). However, these knowledge resources still lack semantic information about language units such as phrases and collocations. For instance, which semantic classes are expected as a direct object of the verb break? What kinds of noun does the adjective amazing collocate with? Recognition of the need for systems that are aware of the selectional restrictions of verbs and, more in general, of textual expressions, dates back to several decades (Wilks, 1975), but today it is more relevant than ever, as is testified by the current interest in semantic class learning (Kozareva et al., 2008) and supertype acquisition (Kozareva and Hovy, 2010). 
These approaches leverage lexico-syntactic patterns and input seeds to recursively learn the semantic classes of relation arguments. However, they require the manual selection of one or more seeds for each pattern of interest, and this selection influences the amount and kind of semantic classes to be learned. Furthermore, the learned classes are not directly linked to existing resources such as WordNet (Fellbaum, 1998) or Wikipedia. The goal of our research is to create a largescale repository of semantic predicates whose lexical arguments are replaced by their semantic classes. For example, given the textual expression break a toe we want to create the correspond1222 ing semantic predicate break a BODY PART, where BODY PART is a class comprising several lexical realizations, such as leg, arm, foot, etc. This paper provides three main contributions: • We propose SPred, a novel approach which harvests predicates from Wikipedia and generalizes them by leveraging core concepts from WordNet. • We create a large-scale resource made up of semantic predicates. • We demonstrate the high quality of our semantic predicates, as well as the generality of our approach, also in comparison with our closest competitor. 2 Preliminaries We introduce two preliminary definitions which we use in our approach. Definition 1 (lexical predicate). A lexical predicate w1 w2 . . . wi ∗wi+1 . . . wn is a regular expression, where wj are tokens (j = 1, . . . , n), ∗ matches any sequence of one or more tokens, and i ∈{0, . . . , n}. We call the token sequence which matches ∗the filling argument of the predicate. For example, a * of milk matches occurrences such as a full bottle of milk, a glass of milk, a carton of milk, etc. While in principle * could match any sequence of words, since we aim at generalizing nouns, in what follows we allow ∗to match only noun phrases (e.g., glass, hot cup, very big bottle, etc.). Definition 2 (semantic predicate). A semantic predicate is a sequence w1 w2 . . . wi c wi+1 . . . wn, where wj are tokens (j = 1, . . . , n), c ∈C is a semantic class selected from a fixed set C of classes, and i ∈{0, . . . , n}. As an example, consider the semantic predicate cup of BEVERAGE,1 where BEVERAGE is a semantic class representing beverages. This predicate matches phrases like cup of coffee, cup of tea, etc., but not cup of sky. Other examples include: MUSICAL INSTRUMENT is played by, a CONTAINER of milk, break AGREEMENT, etc. Semantic predicates mix the lexical information of a given lexical predicate with the explicit semantic modeling of its argument. Importantly, the same lexical predicate can have different classes as its argument, like cup of FOOD vs. cup of BEVERAGE. Note, however, that different classes might convey different semantics for the same lexical 1In what follows we denote the SEMANTIC CLASS in small capitals and the lexical predicate in italics. predicate, such as cup of COUNTRY, referring to cup as a prize instead of cup as a container. 3 Large-Scale Harvesting of Semantic Predicates The goal of this paper is to provide a fully automatic approach for the creation of a large repository of semantic predicates in three phases. For each lexical predicate of interest (e.g., break ∗): 1. We extract all its possible filling arguments from Wikipedia, e.g., lease, contract, leg, arm, etc. (Section 3.1). 2. We disambiguate as many filling arguments as possible using Wikipedia, obtaining a set of corresponding Wikipedia pages, e.g., Lease, Contract, etc. (Section 3.2). 3. 
We create the semantic predicates by generalizing the Wikipedia pages to their most suitable semantic classes, e.g., break AGREEMENT, break LIMB, etc. (Section 3.3). We can then exploit the learned semantic predicates to assign the most suitable semantic class to new filling arguments for the given lexical predicate (Section 3.4). 3.1 Extraction of Filling Arguments Let π be an input lexical predicate (e.g., break ∗). We search the English Wikipedia for all the token sequences which match π, resulting in a list of noun phrases filling the ∗argument. We show an excerpt of the output obtained when searching Wikipedia for the arguments of the lexical predicate a * of milk in Table 1. As can be seen, a wide range of noun phrases are extracted, from quantities such as glass and cup to other aspects, such as brand and constituent. The output of this first step is a set Lπ of triples (a, s, l) of filling arguments a matching the lexical predicate π in a sentence s of the Wikipedia corpus, with a potentially linked to a page l (e.g., see the top 3 rows in Table 1; l = ϵ if no link is provided, see bottom rows of the Table).2 Note that Wikipedia is the only possible corpus that can be used here for at least two reasons: first, in order to extract relevant arguments, we need a large corpus of a definitional nature; second, we need wide-coverage semantic annotations of filling arguments. 3.2 Disambiguation of Filling Arguments The objective of the second step is to disambiguate as many arguments in Lπ as possible for the lex2We will also refer to l as the sense of a in sentence s. 1223 a full [[bottle]] of milk a nice hot [[cup]] of milk a cold [[glass]] of milk a very big bottle of milk a brand of milk a constituent of milk Table 1: An excerpt of the token sequences which match the lexical predicate a * of milk in Wikipedia (filling argument shown in the second column; following the Wikipedia convention we provide links in double square brackets). ical predicate π. We denote Dπ = {(a, s, l) : l ̸= ϵ} ⊆Lπ as the set of those arguments originally linked to the corresponding Wikipedia page (like the top three linked arguments in Table 1). Therefore, in the rest of this section we will focus only on the remaining triples (a, s, ϵ) ∈Uπ, where Uπ = Lπ \Dπ, i.e., those triples whose arguments are not semantically annotated. Our goal is to replace ϵ with an appropriate sense, i.e., page, for a. For each such triple (a, s, ϵ) ∈Uπ, we apply the following disambiguation heuristics: • One sense per page: if another occurrence of a in the same Wikipedia page (independent of the lexical predicate) is linked to a page l, then remove (a, s, ϵ) from Uπ and add (a, s, l) to Dπ. In other words, we propagate an existing annotation of a in the same Wikipedia page and apply it to our ambiguous item. For instance, cup of coffee appears in the Wikipedia page Energy drink in the sentence “[. . . ] energy drinks contain more caffeine than a strong cup of coffee”, but this occurrence of coffee is not linked. However the second paragraph contains the sentence “[[Coffee]], tea and other naturally caffeinated beverages are usually not considered energy drinks”, where coffee is linked to the Coffee page. This heuristic naturally reflects the broadly known assumption about lexical ambiguity presented in (Yarowsky, 1995), namely the one-sense-per-discourse heuristic. • One sense per lexical predicate: if ∃(a, s′, l) ∈Dπ, then remove (a, s, ϵ) from Uπ and add (a, s, l) to Dπ. 
If multiple senses of a are available, choose the most frequent one in Dπ. For example, in the page Singaporean cuisine the occurrence of coffee in the sentence “[. . . ] combined with a cup of coffee and a half-boiled egg” is not linked, but we have collected many other occurrences, all linked to the Coffee page, so this link gets propagated to our ambiguous item as well. This heuristic mimes the one-sense-percollocation heuristic presented in (Yarowsky, 1995). • Trust the inventory: if Wikipedia provides only one sense for a, i.e., only one page title whose lemma is a, link a to that page. Consider the instance “At that point, Smith threw down a cup of Gatorade” in page Jimmy Clausen; there is only one sense for Gatorade in Wikipedia, so we link the unannotated occurrence to it. As a result, the initial set of disambiguated arguments in Dπ is augmented with all those triples for which any of the above three heuristics apply. Note that Dπ might contain the same argument several times, occurring in different sentences and linked many times to the same page or to different pages. Notably, the discovery of new links is made through one scan of Wikipedia per heuristic. The three disambiguation strategies, applied in the same order as presented above, contribute to promoting the most relevant sense for a given word. Finally, let A be the set of arguments in Dπ, i.e., A := {a : ∃(a, s, l) ∈Dπ}. For each argument a ∈A we select the majority sense sense(a) of a and collect the corresponding set of sentences sent(a) marked with that sense. Formally, sense(a) := arg maxl |{(x, y, z) ∈Dπ : x = a∧z = l}| and sent(a) := {s : (a, s, sense(a)) ∈ Dπ}. 3.3 Generalization to Semantic Classes Our final objective is to generalize the annotated arguments to semantic classes picked out from a fixed set C of classes. As explained below, we assume the set C to be made up of representative synsets from WordNet. We perform this in two substeps: we first link all our disambiguated arguments to WordNet (Section 3.3.1) and then leverage the WordNet taxonomy to populate the semantic classes in C (Section 3.3.2). 3.3.1 Linking to WordNet So far the arguments in Dπ have been semantically annotated with the Wikipedia pages they refer to. However, using Wikipedia as our sense inventory is not desirable; in fact, contrarily to other commonly used lexical-semantic networks such as WordNet, Wikipedia is not formally organized in a structured, taxonomic hierarchy. While it is true that attached to each Wikipedia page there are one or more categories, these categories just provide shallow information about the class the page 1224 belongs to. Indeed, categories are not ideal for representing the semantic classes of a Wikipedia page for at least three reasons: i) many categories do not express taxonomic information (e.g., the English page Albert Einstein provides categories such as DEATHS FROM ABDOMINAL AORTIC ANEURYSM and INSTITUTE FOR ADVANCED STUDY FACULTY); ii) categories are mostly structured in a directed acyclic graph with multiple parents per category (even worse, cycles are possible in principle); iii) there is no clear way of identifying core semantic classes from the large set of available categories. Although efforts towards the taxonomization of Wikipedia categories do exist in the literature (Ponzetto and Strube, 2011; Nastase and Strube, 2013), the results are of a lower quality than a hand-built lexical resource. 
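A toy sketch of the three disambiguation heuristics of Section 3.2, together with the majority-sense bookkeeping for sense(a) and sent(a), is given below. This is our own simplified reconstruction, not the SPred implementation: triples are (argument, sentence_id, link) tuples with link = None when unannotated, links_in_page maps a (page, argument) pair to a link annotated anywhere on that page, sentence_page maps a sentence id to the page it comes from, and inventory maps a lemma to its candidate Wikipedia pages.

from collections import Counter, defaultdict

def disambiguate(L_pi, links_in_page, sentence_page, inventory):
    D = [(a, s, l) for (a, s, l) in L_pi if l is not None]
    U = [(a, s, l) for (a, s, l) in L_pi if l is None]
    # One sense per page: propagate a link for `a` found elsewhere in the same page.
    remaining = []
    for a, s, _ in U:
        link = links_in_page.get((sentence_page[s], a))
        if link is not None:
            D.append((a, s, link))
        else:
            remaining.append((a, s, None))
    U = remaining
    # One sense per lexical predicate: reuse the most frequent link already
    # assigned to `a` for this predicate.
    counts = defaultdict(Counter)
    for a, _, l in D:
        counts[a][l] += 1
    remaining = []
    for a, s, _ in U:
        if counts[a]:
            D.append((a, s, counts[a].most_common(1)[0][0]))
        else:
            remaining.append((a, s, None))
    U = remaining
    # Trust the inventory: if Wikipedia offers exactly one page for `a`, use it.
    for a, s, _ in U:
        pages = inventory.get(a, [])
        if len(pages) == 1:
            D.append((a, s, pages[0]))
    # Majority sense per argument (sense(a)) and its supporting sentences (sent(a)).
    per_arg = defaultdict(Counter)
    for a, _, l in D:
        per_arg[a][l] += 1
    sense = {a: c.most_common(1)[0][0] for a, c in per_arg.items()}
    sent = defaultdict(list)
    for a, s, l in D:
        if l == sense[a]:
            sent[a].append(s)
    return D, sense, sent

As in the paper, the heuristics are applied in order, so links recovered by an earlier heuristic feed the statistics used by the next one.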
Therefore, as was done in previous work (Mihalcea and Moldovan, 2001; Ciaramita and Altun, 2006; Izquierdo et al., 2009; Erk and McCarthy, 2009; Huang and Riloff, 2010), we pick out our semantic classes C from WordNet and leverage its manually-curated taxonomy to associate our arguments with the most suitable class. This way we avoid building a new taxonomy and shift the problem to that of projecting the Wikipedia pages – associated with annotated filling arguments – to synsets in WordNet. We address this problem in two steps: Wikipedia-WordNet mapping. We exploit an existing mapping implemented in BabelNet (Navigli and Ponzetto, 2012), a wide-coverage multilingual semantic network that integrates Wikipedia and WordNet.3 Based on a disambiguation algorithm, BabelNet establishes a mapping µ : Wikipages →Synsets which links about 50,000 pages to their most suitable WordNet senses.4 Mapping extension. Nevertheless, BabelNet is able to solve the problem only partially, because it still leaves the vast majority of the 4 million English Wikipedia pages unmapped. This is mainly due to the encyclopedic nature of most pages, which do not have a counterpart in the WordNet dictionary. To address this issue, for each unmapped Wikipedia page p we obtain its textual definition as the first sentence of the page.5 Next, 3http://babelnet.org 4We follow (Navigli, 2009) and denote with wi p the i-th sense of w in WordNet with part of speech p. 5According to the Wikipedia guidelines, “The article should begin with a short declarative sentence, answering two questions for the nonspecialist reader: What (or who) is the subject? and Why is this subject notable?”, extracted from http://en.wikipedia.org/wiki/ we extract the hypernym from the textual definition of p by applying Word-Class Lattices (Navigli and Velardi, 2010, WCL6), a domain-independent hypernym extraction system successfully applied to taxonomy learning from scratch (Velardi et al., 2013) and freely available online (Faralli and Navigli, 2013). If a hypernym h is successfully extracted and h is linked to a Wikipedia page p′ for which µ(p′) is defined, then we extend the mapping by setting µ(p) := µ(p′). For instance, the mapping provided by BabelNet does not provide any link for the page Peter Spence; thanks to WCL, though, we are able to set the page Journalist as its hypernym, and link it to the WordNet synset journalist1 n. This way our mapping extension now covers 539,954 pages, i.e., more than an order of magnitude greater than the number of pages originally covered by the BabelNet mapping. 3.3.2 Populating the Semantic Classes We now proceed to populating the semantic classes in C with the annotated arguments obtained for the lexical predicate π. Definition 3 (semantic class of a synset). The semantic class for a WordNet synset S is the class c among those in C which is the most specific hypernym of S according to the WordNet taxonomy. For instance, given the synset tap water1 n, its semantic class is water1 n (while the other more general subsumers in C are not considered, e.g., compound2 n, chemical1 n, liquid3 n, etc). For each argument a ∈A for which a Wikipedia-to-WordNet mapping µ(sense(a)) could be established as a result of the linking procedure described above, we associate a with the semantic class of µ(sense(a)). For example, consider the case in which a is equal to tap water and sense(a) is equal to the Wikipedia page Tap water, in turn mapped to tap water1 n via µ; we thus associate tap water with its semantic class water1 n. 
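Definition 3 can be read directly off the WordNet hypernym hierarchy. The sketch below uses NLTK's WordNet interface purely for illustration (the paper does not specify an implementation); core_classes stands for the precomputed set C of core synsets.

from nltk.corpus import wordnet as wn

def semantic_classes(synset, core_classes):
    # Walk each hypernym path (root -> ... -> synset) in reverse so that the most
    # specific ancestor in C is found first; the synset itself counts if it is in C.
    found = set()
    for path in synset.hypernym_paths():
        for ancestor in reversed(path):
            if ancestor in core_classes:
                found.add(ancestor)
                break
    return found

# For the running example, semantic_classes(wn.synset('tap_water.n.01'), core_classes)
# yields the core 'water' synset (water1_n in the paper's notation), provided that
# synset is in C; multiple hypernym paths can occasionally yield more than one class.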
If more than one class can be found we add a to each of them.7 Ultimately, for each class c ∈C, we obtain a set support(c) made up of all the arguments a ∈A associated with c. For instance, support(beverage1 n) = { chinese tea, 3.2% beer, hot cocoa, cider, ..., orange juice }. Note that, thanks to our extended mapping (cf. Section 3.3.1), the support of a class can also contain arguments not covered in WordNet (e.g., hot cocoa and tejuino). Wikipedia:Writing_better_articles. 6http://lcl.uniroma1.it/wcl 7This can rarely happen due to multiple hypernyms available in WordNet for the same synset. 1225 Pclass(c|π) c support(c) 0.1896 wine1 n wine, sack, white wine, red wine, wine in china, madeira wine, claret, kosher wine 0.1805 coffee1 n turkish coffee, drip coffee, espresso, coffee, cappucino, caff`e latte, decaffeinated coffee, latte 0.1143 herb2 n green tea, indian tea, black tea, orange pekoe tea, tea 0.1104 water1 n water, seawater 0.0532 beverage1 n chinese tea, 3.2% beer, orange soda, boiled water, hot chocolate, hot cocoa, tejuino, cider, beverage, cocoa, coffee milk, lemonade, orange juice 0.0403 milk1 n skim milk, milk, cultured buttermilk, whole milk 0.0351 beer1 n 3.2% beer, beer 0.0273 alcohol1 n mead, umeshu, kava, rice wine, j¨agermeister, kvass, sake, gin, rum 0.0182 poison1 n poison Table 2: Highest-probability semantic classes for the lexical predicate π = cup of *, according to our set C of semantic classes. Since not all classes are equally relevant to the lexical predicate π, we estimate the conditional probability of each class c ∈C given π on the basis of the number of sentences which contain an argument in that class. Formally: Pclass(c|π) = P a∈support(c) |sent(a)| Z , (1) where Z is a normalization factor calculated as Z = P c′∈C P a∈support(c′) |sent(a)|. As an example, in Table 2 we show the highest-probability classes for the lexical predicate cup of ∗. As a result of the probabilistic association of each semantic class c with a target lexical predicate w1 w2 . . . wi ∗wi+1 . . . wn, we obtain a semantic predicate w1 w2 . . . wi c wi+1 . . . wn. 3.4 Classification of new arguments Once the semantic predicates for the input lexical predicate π have been learned, we can classify a new filling argument a of π. However, the class probabilities calculated with Formula 1 might not provide reliable scores for several classes, including unseen ones whose probability would be 0. To enable wide coverage we estimate a second conditional probability based on the distributional semantic profile of each class. To do this, we perform three steps: 1. For each WordNet synset S we create a distributional vector ⃗S summing the noun occurrences within all the Wikipedia pages p such that µ(p) = S. Next, we create a distributional vector for each class c ∈C as follows: ⃗c = P S∈desc(c) ⃗S, where desc(c) is the set of all synsets which are descendants of the semantic class c in WordNet. As a result we obtain a predicate-independent distributional description for each semantic class in C. 2. Now, given an argument a of a lexical predicate π, we create a distributional vector ⃗a by summing the noun occurrences of all the sentences s such that (a, s, l) ∈Lπ (cf. Section 3.1). 3. Let Ca be the set of candidate semantic classes for argument a, i.e., Ca contains the semantic classes for the WordNet synsets of a as well as the semantic classes associated with µ(p) for all Wikipedia pages p whose lemma is a. 
For each candidate class c ∈Ca, we determine the cosine similarity between the distributional vectors ⃗c and ⃗a as follows: sim(⃗c,⃗a) = ⃗c · ⃗a ||⃗c|| ||⃗a||. Then, we determine the most suitable semantic class c ∈Ca of argument a as the class with the highest distributional probability, estimated as: Pdistr(c|π, a) = sim(⃗c,⃗a) P c′∈Ca sim(⃗c ′,⃗a). (2) We can now choose the most suitable class c ∈ Ca for argument a which maximizes the probability mixture of the distributional probability in Formula 2 and the class probability in Formula 1: P(c|π, a) = αPdistr(c|π, a)+(1−α)Pclass(c|π), (3) where α ∈[0, 1] is an interpolation factor. We now illustrate the entire process of our algorithm on a real example. Given a textual expression such as virus replicate, we: (i) extract all the filling arguments of the lexical predicate * replicate; (ii) link and disambiguate the extracted filling arguments; (iii) query our system for the available virus semantic classes (i.e., {virus1 n, virus3 n}); (iv) build the distributional vectors for 1226 the candidate semantic classes and the given input argument; (v) calculate the probability mixture. As a result we obtain the following ranking, virus1 n:0.250, virus3 n:0.000894, so that the first sense of virus in WordNet 3.0 is preferred, being an “ultramicroscopic infectious agent that replicates itself only within cells of living hosts”. 4 Experiment 1: Oxford Lexical Predicates We evaluate on the two forms of output produced by SPred: (i) the top-ranking semantic classes of a lexical predicate, as obtained with Formula 1, and (ii) the classification of a lexical predicate’s argument with the most suitable semantic class, as produced using Formula 3. For both evaluations, we use a lexical predicate dataset built from the Oxford Advanced Learner’s Dictionary (Crowther, 1998). 4.1 Set of Semantic Classes The selection of which semantic classes to include in the set C is of great importance. In fact, having too many classes will end up in an overly finegrained inventory of meanings, whereas an excessively small number of classes will provide little discriminatory power. As our set C of semantic classes we selected the standard set of 3,299 core nominal synsets available in WordNet.8 However, our approach is flexible and can be used with classes of an arbitrary level of granularity. 4.2 Datasets The Oxford Advanced Learner’s Dictionary provides usage notes that contain typical predicates in various semantic domains in English, e.g., Traveling.9 Each predicate is made up of a fixed part (e.g., a verb) and a generalizable part which contains one or more nouns. Examples include fix an election/the vote, bacteria/microbes/viruses spread, spend money/savings/a fortune. In the case that more than one noun was provided, we split the textual expression into as many items as the number of nouns. For instance, from spend money/savings/a fortune we created three items in our dataset, i.e., spend money, spend savings, spend a fortune. The splitting procedure generated 6,220 instantiated lexical predicate items overall. 8http://wordnetcode.princeton.edu/ standoff-files/core-wordnet.txt 9http://oald8.oxfordlearnersdictionaries. 
com/usage_notes/unbox_colloc/ k Prec@k Correct Total 1 0.94 46 49 2 0.87 85 98 3 0.86 124 145 4 0.83 160 192 5 0.82 194 237 6 0.81 228 282 7 0.80 261 326 8 0.78 288 370 9 0.77 318 414 10 0.76 349 458 11 0.75 379 502 12 0.75 411 546 13 0.75 445 590 14 0.76 479 634 15 0.75 510 678 16 0.75 544 721 17 0.76 577 763 18 0.76 612 806 19 0.76 643 849 20 0.75 671 892 Table 3: Precision@k for ranking the semantic classes of lexical predicates. 4.3 Evaluating the Semantic Class Ranking Dataset. Given the above dataset, we generalized each item by pairing its fixed verb part with * (i.e., we keep “verb predicates” only, since they are more informative). For instance, the three items bacteria/microbes/viruses spread were generalized into the lexical predicate * spread. The total number of different lexical predicates obtained was 1,446, totaling 1,429 distinct verbs (note that the dataset might contain the lexical predicate * spread as well as spread *).10 Methodology. For each lexical predicate we calculated the conditional probability of each semantic class using Formula 1, resulting in a ranking of semantic classes. To evaluate the top ranking classes, we calculated precision@k, with k ranging from 1 to 20, by counting all applicable classes as correct, e.g., location1 n is a valid semantic class for travel to * while emotion1 n is not. Results. We show in Table 3 the precision@k calculated over a random sample of 50 lexical predicates.11 As can be seen, while the classes quality is pretty high with low values of k, performance gradually degrades as we let k increase. This is mostly due to the highly polysemous nature of the predicates selected (e.g., occupy *, leave *, help *, attain *, live *, etc.). We note that high performance, attaining above 80%, can be achieved 10The low number of items per predicate is due to the original Oxford resource. 11One lexical predicate did not have any semantic class ranking. 1227 by focusing up to the first 7 classes output by our system, with a 94% precision@1. 4.4 Evaluating Classification Performance Dataset. Starting from the lexical predicate items obtained as described in Section 4.2, we selected those items belonging to a random sample of 20 usage notes among those provided by the Oxford dictionary, totaling 3,245 items. We then manually tagged each item’s argument (e.g., virus in viruses spread) with the most suitable semantic class (e.g., virus1 n), obtaining a gold standard dataset for the evaluation of our argument classification algorithm (cf. Section 3.4). Methodology. In this second evaluation we measure the accuracy of our method at assigning the most suitable semantic class to the argument of a lexical predicate item in our gold standard. We use three customary measures to determine the quality of the acquired semantic classes, i.e., precision, recall and F1. Precision is the number of items which are assigned the correct class (as evaluated by a human) over the number of items which are assigned a class by the system. Recall is the number of items which are assigned the correct class over the number of items to be classified. F1 is the harmonic mean of precision and recall. Tuning. The only parameter to be tuned is the factor α that we use to mix the two probabilities in Formula 3 (cf. Section 3.4). For tuning α we used a held-out set of 8 verbs, randomly sampled from the lexical predicates not used in the dataset. 
We created a tuning set using the annotated arguments in Wikipedia for these verbs: we trained the model on 80% of the annotated lexical predicate arguments (i.e., the class probability estimates in Formula 1) and then applied the probability mixture (i.e., Formula 3) for classifying the remaining 20% of arguments. Finally, we calculated the performance in terms of precision, recall and F1 with 11 different values of α ∈{0, 0.1, . . . , 1.0}, achieving optimal performance with α = 0.2. Results. Table 4 shows the results on the semantic class assignments. Our system shows very high precision, above 85%, while at the same time attaining an adequate 68% recall. We also compared against a random baseline that randomly selects one out of all the candidate semantic classes for each item, achieving only moderate results. A subsequent error analysis revealed the common types of error produced by our system: terms for which we could not provide (1) any WordNet concept Method Precision Recall F1 SPred 85.61 68.01 75.80 Random 40.96 40.96 40.96 Table 4: Performance on semantic class assignment. (e.g., political corruption) or (2) any candidate semantic class (e.g., immune system). 4.5 Disambiguation heuristics impact As a follow-up analysis, for each dataset we considered the impact of each disambiguation heuristic described in Section 3.2 according to how many times it was triggered. Starting from the entire set of 1,446 lexical predicates from the Oxford dictionary (see Section 4.3), we counted the number of argument triples (a, s, l) already disambiguated in Wikipedia (i.e., l ̸= ϵ) and those disambiguated thanks to our disambiguation strategies. Table 5 shows the statistics. We note that, while the amount of originally linked arguments is very low (about 2.5% of total), our strategies are able to considerably increase the size of the initial set of linked instances. The most effective strategies appear to be the One sense per page and the Trust the inventory, which contribute 26.16% and 31.33% of the total links, respectively. Even though most of the triples (i.e., 68 out of almost 74 million) remain unlinked, the ratio of distinct arguments which we linked to WordNet is considerably higher, calculated as 3,723,979 linked arguments over 12,431,564 distinct arguments, i.e., about 30%. 5 Experiment 2: Comparison with Kozareva & Hovy (2010) Due to the novelty of the task carried out by SPred, the resulting output may be compared with only a limited number of existing approaches. The most similar approach is that of Kozareva and Hovy (2010, K&H) who assign supertypes to the arguments of arbitrary relations, a task which resembles our semantic predicate ranking. We therefore performed a comparison on the quality of the most highly-ranked supertypes (i.e., semantic classes) using their dataset of 24 relation patterns (i.e., lexical predicates). Dataset. The dataset contained 14 lexical predicates (e.g., work for * or * fly to), 10 of which were expanded in order to semantify their left- and right-side arguments (e.g., * work for and work for *); for the remaining 4 predicates just a single 1228 Total Linked in One sense One sense per Trust the Not triples Wikipedia per page lexical predicate inventory linked 73,843,415 1,795,608 1,433,634 533,946 1,716,813 68,363,414 Table 5: Statistics on argument triple linking for all the lexical predicates in the Oxford dataset. 
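Two pieces of bookkeeping behind the numbers reported in this section can be sketched compactly: the Prec@k computation used for Table 3 above (and Table 6 below), and the classification rule of Formulas 2 and 3 evaluated in Table 4. Both functions are our reconstructions rather than the released code; rankings maps a lexical predicate to its classes sorted by decreasing Pclass, is_correct holds the binary human judgment for a (predicate, class) pair, the vectors are sparse dicts of noun counts, and p_class holds the Formula 1 estimates.

import math

def precision_at_k(rankings, is_correct, k):
    correct = total = 0
    for predicate, classes in rankings.items():
        for c in classes[:k]:              # fewer than k if the ranking is short
            total += 1
            correct += int(is_correct[(predicate, c)])
    return correct / total if total else 0.0, correct, total

def cosine(u, v):
    num = sum(u[w] * v[w] for w in u if w in v)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def classify(arg_vec, candidate_classes, class_vecs, p_class, alpha=0.2):
    # Formula 2: distributional probability over the candidate classes of the argument.
    sims = {c: cosine(class_vecs[c], arg_vec) for c in candidate_classes}
    z = sum(sims.values()) or 1.0
    # Formula 3: mixture of distributional and class probabilities (alpha tuned to 0.2).
    score = {c: alpha * (sims[c] / z) + (1 - alpha) * p_class.get(c, 0.0)
             for c in candidate_classes}
    return max(score, key=score.get)

Note that the Total column in Table 3 falls short of 49 x k for larger k precisely because some predicates have fewer than k candidate classes.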
k Prec@k Correct Total 1 0.88 21 24 2 0.90 43 48 3 0.88 63 72 4 0.89 85 96 5 0.91 109 120 6 0.91 131 144 7 0.92 154 168 8 0.91 175 192 9 0.92 198 216 10 0.92 221 240 11 0.92 242 264 12 0.92 264 288 13 0.91 284 312 14 0.90 304 336 15 0.91 327 360 16 0.91 348 384 17 0.90 367 408 18 0.89 386 432 19 0.89 407 456 20 0.89 429 480 Table 6: Precision@k for the semantic classes of the relations of Kozareva and Hovy (2010). side was generalized (e.g., * dress). While most of the relations apply to persons as a supertype, our method could find arguments for each of them. Methodology. We carried out the same evaluation as in Section 4.3. We calculated precision@k of the semantic classes obtained for each relation in the dataset of K&H. Because the set of applicable classes was potentially unbounded, we were not able to report recall directly. Results. K&H reported an overall accuracy of the top-20 supertypes of 92%. As can be seen in Table 6 we exhibit very good performance with increasing values of k. A comparison of Table 3 with Table 6 shows considerable differences in performance between the two datasets. We attribute this difference to the higher average WordNet polysemy of the verbal component of the Oxford predicates (on average 2.64 senses for K&H against 6.52 for the Oxford dataset). Although we cannot report recall, we list the number of Wikipedia arguments and associated classes in Table 7, which provides an estimate of the extraction capability of SPred. The large number of classes found for the arguments demonstrates the ability of our method to generalize to a variety of semantic classes. Predicate Number of args Number of classes cause * 181,401 1,339 live in * 143,628 600 go to * 134,712 867 * cause 92,160 1,244 work in * 79,444 770 * go to 71,794 746 * live in 61,074 541 work on * 58,760 840 work for * 58,332 681 work at * 31,904 511 * work in 24,933 528 * celebrate 23,333 408 Table 7: Number of arguments and associated classes for the 12 most frequent lexical predicates of Kozareva and Hovy (2010) extracted by SPred from Wikipedia. 6 Related work The availability of Web-scale corpora has led to the production of large resources of relations (Etzioni et al., 2005; Yates et al., 2007; Wu and Weld, 2010; Carlson et al., 2010; Fader et al., 2011). However, these resources often operate purely at the lexical level, providing no information on the semantics of their arguments or relations. Several studies have examined adding semantics through grouping relations into sets (Yates and Etzioni, 2009), ontologizing the arguments (Chklovski and Pantel, 2004), or ontologizing the relations themselves (Moro and Navigli, 2013). However, analysis has largely been either limited to ontologizing a small number of relation types with a fixed inventory, which potentially limits coverage, or has used implicit definitions of semantic categories (e.g., clusters of arguments), which limits interpretability. For example, Mohamed et al. (2011) use the semantic categories of the NELL system (Carlson et al., 2010) to learn roughly 400 valid ontologized relations from over 200M web pages, whereas WiSeNet (Moro and Navigli, 2012) leverages Wikipedia to acquire relation synsets for an open set of relations. Despite these efforts, no large-scale resource has existed to date that contains ontologized lexical predicates. 
In contrast, the present work provides a high-coverage method for learning argument supertypes from a broadcoverage ontology (WordNet), which can potentially be leveraged in relation extraction to ontolo1229 gize relation arguments. Our method for identifying the different semantic classes of predicate arguments is closely related to the task of identifying selectional preferences. The most similar approaches to it are taxonomybased ones, which leverage the semantic types of the relations arguments (Resnik, 1996; Li and Abe, 1998; Clark and Weir, 2002; Pennacchiotti and Pantel, 2006). Nevertheless, despite their high quality sense-tagged data, these methods have often suffered from lack of coverage. As a result, alternative approaches have been proposed that eschew taxonomies in favor of rating the quality of potential relation arguments (Erk, 2007; Chambers and Jurafsky, 2010) or generating probability distributions over the arguments (Rooth et al., 1999; Pantel et al., 2007; Bergsma et al., 2008; Ritter et al., 2010; S´eaghdha, 2010; Bouma, 2010; Jang and Mostow, 2012) in order to obtain higher coverage of preferences. In contrast, we overcome the data sparsity of class-based models by leveraging the large quantity of collaboratively-annotated Wikipedia text in order to connect predicate arguments with their semantic class in WordNet using BabelNet (Navigli and Ponzetto, 2012); because we map directly to WordNet synsets, we provide a more readilyinterpretable collocation preference model than most similarity-based or probabilistic models. Verb frame extraction (Green et al., 2004) and predicate-argument structure analysis (Surdeanu et al., 2003; Yakushiji et al., 2006) are two areas that are also related to our work. But their generality goes beyond our intentions, as we focus on semantic predicates, which is much simpler and free from syntactic parsing. Another closely related work is that of Hanks (2013) concerning the Theory of Norms and Exploitations, where norms (exploitations) represent expected (unexpected) classes for a given lexical predicate. Although our semantified predicates do, indeed, provide explicit evidence of norms obtained from collective intelligence and would provide support for this theory, exploitations present a more difficult task, different from the one addressed here, due to its focus on identifying property transfer between the semantic class and the exploited instance. The closest technical approach to ours is that of Kozareva and Hovy (2010), who use recursive patterns to induce semantic classes for the arguments of relational patterns. Whereas their approach requires both a relation pattern and one or more seeds, which bias the types of semantic classes that are learned, our proposed method requires only the pattern itself, and as a result is capable of learning an unbounded number of different semantic classes. 7 Conclusions In this paper we present SPred, a novel approach to large-scale harvesting of semantic predicates. In order to semantify lexical predicates we exploit the wide coverage of Wikipedia to extract and disambiguate lexical predicate occurrences, and leverage WordNet to populate the semantic classes with suitable predicate arguments. As a result, we are able to ontologize lexical predicate instances like those available in existing dictionaries (e.g., break a toe) into semantic predicates (such as break a BODY PART). 
For each lexical predicate (such as break ∗), our method produces a probability distribution over the set of semantic classes (thus covering the different expected meanings for the filling arguments) and is able to classify new instances with the most suitable class. Our experiments show generally high performance, also in comparison with previous work on argument supertyping. We hope that our semantic predicates will enable progress in different Natural Language Processing tasks such as Word Sense Disambiguation (Navigli, 2009), Semantic Role Labeling (F¨urstenau and Lapata, 2012) or even Textual Entailment (Stern and Dagan, 2012) – each of which is in urgent need of reliable semantics. While we focused on semantifying lexical predicates, as future work we will apply our method to the ontologization of large amounts of sequences of words, such as phrases or textual relations (e.g., considering Google n-grams appearing in Wikipedia). Notably, our method should, in principle, generalize to any semantically-annotated corpus (e.g., Wikipedias in other languages), provided lexical predicates can be extracted with associated semantic classes. In order to support future efforts we are releasing our semantic predicates as a freely available resource.12 Acknowledgments The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. Thanks go to David A. Jurgens, Silvia Necs¸ulescu, Stefano Faralli and Moreno De Vincenzi for their help. 12http://lcl.uniroma1.it/spred 1230 References Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2012. Learning entailment relations by global graph structure optimization. Computational Linguistics, 38(1):73–111. Shane Bergsma, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. In Proc. of EMNLP, pages 59–68, Stroudsburg, PA, USA. Christian Bizer, Jens Lehmann, Georgi Kobilarov, S¨oren Auer, Christian Becker, Richard Cyganiak, and Sebastian Hellmann. 2009. DBpedia - a crystallization point for the Web of Data. Web Semantics, 7(3):154–165. Gerlof Bouma. 2010. Collocation Extraction beyond the Independence Assumption. In Proc. of ACL, Short Papers, pages 109–114, Uppsala, Sweden. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka, and Tom M. Mitchell. 2010. Toward an architecture for never-ending language learning. In Proc. of AAAI, pages 1306–1313, Atlanta, Georgia. Nathanael Chambers and Dan Jurafsky. 2010. Improving the use of pseudo-words for evaluating selectional preferences. In Proc. of ACL, pages 445–453, Stroudsburg, PA, USA. Tim Chklovski and Patrick Pantel. 2004. VerbOcean: Mining the Web for fine-grained semantic verb relations. In Proc. of EMNLP, pages 33–40, Barcelona, Spain. Jennifer Chu-Carroll and John Prager. 2007. An experimental study of the impact of information extraction accuracy on semantic search performance. In Proc. of CIKM, pages 505–514, Lisbon, Portugal. Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-Coverage Sense Disambiguation and Information Extraction with a Supersense Sequence Tagger. In Proc. of EMNLP, pages 594–602, Sydney, Australia. Stephen Clark and David Weir. 2002. Class-based probability estimation using a semantic hierarchy. Computational Linguistics, 28(2):187–206. Jonathan Crowther, editor. 1998. Oxford Advanced Learner’s Dictionary. Cornelsen & Oxford, 5th edition. Flavio De Benedictis, Stefano Faralli, and Roberto Navigli. 2013. 
GlossBoot: Bootstrapping multilingual domain glossaries from the Web. In Proc. of ACL, Sofia, Bulgaria. Gerard de Melo and Gerhard Weikum. 2010. MENTA: Inducing Multilingual Taxonomies from Wikipedia. In Proc. of CIKM, pages 1099–1108, New York, NY, USA. Katrin Erk and Diana McCarthy. 2009. Graded word sense assignment. In Proc. of EMNLP, pages 440– 449, Stroudsburg, PA, USA. Katrin Erk. 2007. A Simple, Similarity-based Model for Selectional Preferences. In Proc. of ACL, pages 216–223, Prague, Czech Republic. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: an experimental study. Artificial Intelligence, 165(1):91–134. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying Relations for Open Information Extraction. In Proc. of EMNLP, pages 1535–1545, Edinburgh, UK. Stefano Faralli and Roberto Navigli. 2013. A Java framework for multilingual definition and hypernym extraction. In Proc. of ACL, Comp. Volume, Sofia, Bulgaria. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA. David A. Ferrucci, Eric W. Brown, Jennifer ChuCarroll, James Fan, David Gondek, Aditya Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John M. Prager, Nico Schlaefer, and Christopher A. Welty. 2010. Building Watson: an overview of the DeepQA project. AI Magazine, 31(3):59–79. Hagen F¨urstenau and Mirella Lapata. 2012. Semisupervised semantic role labeling via structural alignment. Computational Linguistics, 38(1):135– 171. Roxana Girju, Adriana Badulescu, and Dan Moldovan. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proc. of HLT-NAACL, pages 1–8, Edmonton, Canada. Rebecca Green, Bonnie J. Dorr, and Philip Resnik. 2004. Inducing Frame Semantic Verb Classes from WordNet and LDOCE. In Proc. of ACL, pages 375– 382, Barcelona, Spain. Patrick Hanks. 2013. Lexical Analysis: Norms and Exploitations. University Press Group Limited. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of COLING, pages 539–545, Nantes, France. Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, and Gerhard Weikum. 2013. Yago2: A spatially and temporally enhanced knowledge base from wikipedia. Artificial Intelligence, 194:28–61. Eduard H. Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semistructured content and artificial intelligence: The story so far. Artificial Intelligence, 194:2–27. Ruihong Huang and Ellen Riloff. 2010. Inducing Domain-Specific Semantic Class Taggers from (Almost) Nothing. In Proc. of ACL, pages 275–285, Uppsala, Sweden. Sean P. Igo and Ellen Riloff. 2009. Corpus-based semantic lexicon induction with Web-based corroboration. In Proc. of UMSLLS, pages 18–26, Stroudsburg, PA, USA. Rub´en Izquierdo, Armando Su´arez, and German Rigau. 2009. An Empirical Study on Class-Based Word Sense Disambiguation. In Proc. of EACL, pages 389–397, Athens, Greece. Hyeju Jang and Jack Mostow. 2012. Inferring selectional preferences from part-of-speech n-grams. In Proc. of EACL, pages 377–386, Stroudsburg, PA, USA. 1231 Boris Katz, Jimmy J. Lin, Daniel Loreto, Wesley Hildebrandt, Matthew W. Bilotti, Sue Felshin, Aaron Fernandes, Gregory Marton, and Federico Mora. 2003. Integrating Web-based and Corpus-based Techniques for Question Answering. In Proc. of TREC, pages 426–435, Gaithersburg, Maryland. 
Zornitsa Kozareva and Eduard Hovy. 2010. Learning Arguments and Supertypes of Semantic Relations Using Recursive Patterns. In Proc. of ACL, pages 1482–1491, Uppsala, Sweden. Zornitsa Kozareva, Ellen Riloff, and Eduard H. Hovy. 2008. Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs. In Proc. ACL/HLT, pages 1048–1056, Columbus, Ohio. Sebastian Krause, Hong Li, Hans Uszkoreit, and Feiyu Xu. 2012. Large-scale learning of relationextraction rules with distant supervision from the web. In Proc. of ISWC 2012, Part I, pages 263–278, Boston, MA. Hang Li and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the MDL principle. Computational Linguistics, 24(2):217–244. Rada Mihalcea and Dan Moldovan. 2001. eXtended WordNet: Progress report. In Proceedings of the NAACL-01 Workshop on WordNet and Other Lexical Resources, pages 95–100, Pittsburgh, Penn. Thahir Mohamed, Estevam Hruschka, and Tom Mitchell. 2011. Discovering Relations between Noun Categories. In Proc. of EMNLP, pages 1447– 1455, Edinburgh, Scotland, UK. Andrea Moro and Roberto Navigli. 2012. WiSeNet: Building a Wikipedia-based semantic network with ontologized relations. In Proc. of CIKM, pages 1672–1676, Maui, HI, USA. Andrea Moro and Roberto Navigli. 2013. Integrating Syntactic and Semantic Analysis into the Open Information Extraction Paradigm. In Proc. of IJCAI, Beijing, China. Vivi Nastase and Michael Strube. 2013. Transforming wikipedia into a large scale multilingual concept network. Artificial Intelligence, 194:62–85. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217– 250. Roberto Navigli and Paola Velardi. 2010. Learning Word-Class Lattices for Definition and Hypernym Extraction. In Proc. of ACL, pages 1318–1327, Uppsala, Sweden. Roberto Navigli. 2009. Word Sense Disambiguation: A survey. ACM Computing Surveys, 41(2):1–69. Patrick Pantel, Rahul Bhagat, Timothy Chklovski, and Eduard Hovy. 2007. ISP: learning inferential selectional preferences. In Proc. of NAACL, pages 564– 571, Rochester, NY. Marius Pasca. 2004. Acquisition of categorized named entities for web search. In Proc. of CIKM, pages 137–145, New York, NY, USA. Marco Pennacchiotti and Patrick Pantel. 2006. Ontologizing semantic relations. In Proc. of COLING, pages 793–800, Sydney, Australia. Simone Paolo Ponzetto and Michael Strube. 2011. Taxonomy induction based on a collaboratively built knowledge repository. Artificial Intelligence, 175(910):1737–1756. Philip Resnik. 1996. Selectional constraints: An information-theoretic model and its computational realization. Cognition, 61(1):127–159. Alan Ritter, Mausam, and Oren Etzioni. 2010. A latent dirichlet allocation method for selectional preferences. In Proc. of ACL, pages 424–434, Uppsala, Sweden. ACL. Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. In Proc. of ACL, pages 104–111, Stroudsburg, PA, USA. Diarmuid O S´eaghdha. 2010. Latent variable models of selectional preference. In Proc. of ACL, pages 435–444, Uppsala, Sweden. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2004. Learning Syntactic Patterns for Automatic Hypernym Discovery. In NIPS, pages 1297–1304, Cambridge, Mass. Asher Stern and Ido Dagan. 2012. Biutee: A modular open-source system for recognizing textual entailment. In Proc. 
of ACL 2012, System Demonstrations, pages 73–78, Jeju Island, Korea. Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proc. ACL, pages 8–15, Stroudsburg, PA, USA. M. Thelen and E. Riloff. 2002. A Bootstrapping Method for Learning Semantic Lexicons using Extraction Pattern Contexts. In Proc. of EMNLP, pages 214–221, Salt Lake City, UT, USA. Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013. OntoLearn Reloaded: A graph-based algorithm for taxonomy induction. Computational Linguistics, 39(3). Yorick Wilks. 1975. A preferential, pattern-seeking, semantics for natural language inference. Artificial Intelligence, 6(1):53–74. Fei Wu and Daniel S. Weld. 2010. Open Information Extraction Using Wikipedia. In Proc. of ACL, pages 118–127, Uppsala, Sweden. Akane Yakushiji, Yusuke Miyao, Tomoko Ohta, Yuka Tateisi, and Jun’ichi Tsujii. 2006. Automatic construction of predicate-argument structure patterns for biomedical information extraction. In Proc. of EMNLP, pages 284–292, Stroudsburg, PA, USA. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proc. of ACL, pages 189–196, Cambridge, MA, USA. Alexander Yates and Oren Etzioni. 2009. Unsupervised methods for determining object and relation synonyms on the web. Journal of Artificial Intelligence Research, 34(1):255. Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. TextRunner: open information extraction on the web. In Proc. of NAACLDemonstrations, pages 25–26, Stroudsburg, PA, USA. 1232
2013
120
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1233–1242, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Towards Robust Abstractive Multi-Document Summarization: A Caseframe Analysis of Centrality and Domain Jackie Chi Kit Cheung University of Toronto 10 King’s College Rd., Room 3302 Toronto, ON, Canada M5S 3G4 [email protected] Gerald Penn University of Toronto 10 King’s College Rd., Room 3302 Toronto, ON, Canada M5S 3G4 [email protected] Abstract In automatic summarization, centrality is the notion that a summary should contain the core parts of the source text. Current systems use centrality, along with redundancy avoidance and some sentence compression, to produce mostly extractive summaries. In this paper, we investigate how summarization can advance past this paradigm towards robust abstraction by making greater use of the domain of the source text. We conduct a series of studies comparing human-written model summaries to system summaries at the semantic level of caseframes. We show that model summaries (1) are more abstractive and make use of more sentence aggregation, (2) do not contain as many topical caseframes as system summaries, and (3) cannot be reconstructed solely from the source text, but can be if texts from in-domain documents are added. These results suggest that substantial improvements are unlikely to result from better optimizing centrality-based criteria, but rather more domain knowledge is needed. 1 Introduction In automatic summarization, centrality has been one of the guiding principles for content selection in extractive systems. We define centrality to be the idea that a summary should contain the parts of the source text that are most similar or representative of the source text. This is most transparently illustrated by the Maximal Marginal Relevance (MMR) system of Carbonell and Goldstein (1998), which defines the summarization objective to be a linear combination of a centrality term and a non-redundancy term. Since MMR, much progress has been made on more sophisticated methods of measuring centrality and integrating it with non-redundancy (See Nenkova and McKeown (2011) for a recent survey). For example, term weighting methods such as the signature term method of Lin and Hovy (2000) pick out salient terms that occur more often than would be expected in the source text based on frequencies in a background corpus. This method is a core component of the most successful summarization methods (Conroy et al., 2006). While extractive methods based on centrality have thus achieved success, there has long been recognition that abstractive methods are ultimately more desirable. One line of work is in text simplification and sentence fusion, which focus on the ability of abstraction to achieve a higher compression ratio (Knight and Marcu, 2000; Barzilay and McKeown, 2005). A less examined issue is that of aggregation and information synthesis. A key part of the usefulness of summaries is that they provide some synthesis or analysis of the source text and make a more general statement that is of direct relevance to the user. For example, a series of related events can be aggregated and expressed as a trend. The position of this paper is that centrality is not enough to make substantial progress towards abstractive summarization that is capable of this type of semantic inference. Instead, summarization systems need to make more use of domain knowledge. 
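For readers who want the MMR criterion mentioned above in operational form, a generic greedy sketch follows; it is a textbook reconstruction rather than any system evaluated here, and centrality, sim and the trade-off weight lam are placeholders.

def mmr_select(candidates, centrality, sim, k, lam=0.7):
    # centrality[s]: similarity of sentence s to the source/query (the centrality term);
    # sim(a, b): inter-sentence similarity (the non-redundancy term); lam trades them off.
    selected, pool = [], set(candidates)
    while pool and len(selected) < k:
        def mmr(s):
            redundancy = max((sim(s, t) for t in selected), default=0.0)
            return lam * centrality[s] - (1.0 - lam) * redundancy
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected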
We provide evidence for this in a series of studies on the TAC 2010 guided summarization data set that examines how the behaviour of automatic summarizers can or cannot be distinguished from human summarizers. First, we confirm that abstraction is a desirable goal, and 1233 provide a quantitative measure of the degree of sentence aggregation in a summarization system. Second, we show that centrality-based measures are unlikely to lead to substantial progress towards abstractive summarization, because current topperforming systems already produce summaries that are more “central” than humans do. Third, we consider how domain knowledge may be useful as a resource for an abstractive system, by showing that key parts of model summaries can be reconstructed from the source plus related in-domain documents. Our contributions are novel in the following respects. First, our analyses are performed at the level of caseframes, rather at the level of words or syntactic dependencies as in previous work. Caseframes are shallow approximations of semantic roles which are well suited to characterizing a domain by its slots. Furthermore, we take a developmental rather than evaluative perspective—our goal is not to develop a new evaluation measure as defined by correlation with human responsiveness judgments. Instead, our studies reveal useful criteria with which to distinguish human-written and system summaries, helping to guide the development of future summarization systems. 2 Related Work Domain-dependent template-based summarization systems have been an alternative to extractive systems which make use of rich knowledge about a domain and information extraction techniques to generate a summary, possibly using a natural language generation system (Radev and McKeown, 1998; White et al., 2001; McKeown et al., 2002). This paper can be seen as a first step towards reconciling the advantages of domain knowledge with the resource-lean extraction approaches popular today. As noted above, Lin and Hovy’s (2000) signature terms have been successful in discovering terms that are specific to the source text. These terms are identified by a log-likelihood ratio test based on their relative frequencies in relevant and irrelevant documents. They were originally proposed in the context of single-document summarization, where they were calculated using indomain (relevant) vs. out-of-domain (irrelevant) text. In multi-document summarization, the indomain text has been replaced by the source text cluster (Conroy et al., 2006), thus they are now used as a form of centrality-based features. In this paper, we use guided summarization data as an opportunity to reopen the investigation into the effect of domain, because multiple document clusters from the same domain are available. Summarization evaluation is typically done by comparing system output to human-written model summaries, and are validated by their correlation with user responsiveness judgments. The comparison can be done at the word level, as in ROUGE (Lin, 2004), at the syntactic level, as in Basic Elements (Hovy et al., 2006), or at the level of summary content units, as in the Pyramid method (Nenkova and Passonneau, 2004). There are also automatic measures which do not require model summaries, but compare against the source text instead (Louis and Nenkova, 2009; Saggion et al., 2010). Several studies complement this paper by examining the best possible extractive system using current evaluation measures, such as ROUGE (Lin and Hovy, 2003; Conroy et al., 2006). 
They find that the best possible extractive systems score higher or as highly than human summarizers, but it is unclear whether this means the oracle summaries are actually as useful as human ones in an extrinsic setting. Genest et al. (2009) ask humans to create extractive summaries, and find that they score in between current automatic systems and human-written abstracts on responsiveness, linguistic quality, and Pyramid score. In the lecture domain, He et al. (1999; 2000) find that lecture transcripts that have been manually highlighted with key points improve students’ quiz scores more than when using automated summarization techniques or when providing only the lecture transcript or slides. Jing and McKeown (2000) manually analyzed 30 human-written summaries, and find that 19% of sentences cannot be explained by cut-and-paste operations from the source text. Saggion and Lapalme (2002) similarly define a list of transformations necessary to convert source text to summary text, and manually analyzed their frequencies. Copeck and Szpakowicz (2004) find that at most 55% of vocabulary items found in model summaries occur in the source text, but they do not investigate where the other vocabulary items might be found. 1234 Sentence: At one point, two bomb squad trucks sped to the school after a backpack scare. Dependencies: num(point, one) prep at(sped, point) num(trucks, two) nn(trucks, bomb) nn(trucks, squad) nsubj(sped, trucks) root(ROOT, sped) det(school, the) prep to(sped, school) det(scare, a) nn(scare, backpack) prep after(sped, scare) Caseframes: (speed, prep at) (speed, nsubj) (speed, prep to) (speed, prep after) Table 1: A sentence decomposed into its dependency edges, and the caseframes derived from those edges that we consider (in black). 3 Theoretical basis of our analysis Many existing summarization evaluation methods rely on word or N-gram overlap measures, but these measures are not appropriate for our analysis. Word overlap can occur due to shared proper nouns or entity mentions. Good summaries should certainly contain the salient entities in the source text, but when assessing the effect of the domain, different domain instances (i.e., different document clusters in the same domain) would be expected to contain different salient entities. Also, the realization of entities as noun phrases depends strongly on context, which would confound our analysis if we do not also correctly resolve coreference, a difficult problem in its own right. We leave such issues to other work (Nenkova and McKeown, 2003, e.g.). Domains would rather be expected to share slots (a.k.a. aspects), which require a more semantic level of analysis that can account for the various ways in which a particular slot can be expressed. Another consideration is that the structures to be analyzed should be extracted automatically. Based on these criteria, we selected caseframes to be the appropriate unit of analysis. A caseframe is a shallow approximation of the semantic role structure of a proposition-bearing unit like a verb, and are derived from the dependency parse of a sentence1. 1Note that caseframes are distinct from (though directly Relation Caseframe Pair Sim. Degree (kill, dobj) (wound, dobj) 0.82 Causal (kill, dobj) (die, nsubj) 0.80 Type (rise, dobj) (drop, prep to) 0.81 Figure 1: Sample pairs of similar caseframes by relation type, and the similarity score assigned to them by our distributional model. 
In particular, they are (gov, role) pairs, where gov is a proposition-bearing element, and role is an approximation of a semantic role with gov as its head (See Figure 1 for examples). Caseframes do not consider the dependents of the semantic role approximations. The use of caseframes is well grounded in a variety of NLP tasks relevant to summarization such as coreference resolution (Bean and Riloff, 2004), and information extraction (Chambers and Jurafsky, 2011), where they serve the central unit of semantic analysis. Related semantic representations are popular in Case Grammar and its derivative formalisms such as frame semantics (Fillmore, 1982). We use the following algorithm to extract caseframes from dependency parses. First, we extract those dependency edges with a relation type of subject, direct object, indirect object, or prepositional object (with the preposition indicated), along with their governors. The governor must be a verb, event noun (as defined by the hyponyms of the WordNet EVENT synset), or nominal or adjectival predicate. Then, a series of deterministic transformations are applied to the syntactic relations to account for voicing alternations, control, raising, and copular constructions. 3.1 Caseframe Similarity Direct caseframe matches account for some variation in the expression of slots, such as voicing alternations, but there are other reasons different caseframes may indicate the same slot (Figure 1). For example, (kill, dobj) and (wound, dobj) both indicate the victim of an attack, but differ by the degree of injury to the victim. (kill, dobj) and (die, nsubj) also refer to a victim, but are linked by a causal relation. (rise, dobj) and inspired by) the similarly named case frames of Case Grammar (Fillmore, 1968). 1235 (drop, prep to) on the other hand simply share a named entity type (in this case, numbers). To account for these issues, we measure caseframe similarity based on their distributional similarity in a large training corpus. First, we construct vector representations of each caseframe, where the dimensions of the vector correspond to the lemma of the head word that fills the caseframe in the training corpus. For example, kicked the ball would result in a count of 1 added to the caseframe (kick, dobj) for the context word ball. Then, we rescale the counts into pointwise mutual information values, which has been shown to be more effective than raw counts at detecting semantic relatedness (Turney, 2001). Similarity between caseframes can then be compared by cosine similarity between the their vector representations. For training, we use the AFP portion of the Gigaword corpus (Graff et al., 2005), which we parsed using the Stanford parser’s typed dependency tree representation with collapsed conjunctions (de Marneffe et al., 2006). For reasons of sparsity, we only considered caseframes that appear at least five times in the guided summarization corpus, and only the 3000 most common lemmata in Gigaword as context words. 3.2 An Example To illustrate how caseframes indicate the slots in a summary, we provide the following fragment of a model summary from TAC about the Unabomber trial: (1) In Sacramento, Theodore Kaczynski faces a 10-count federal indictment for 4 of the 16 mail bomb attacks attributed to the Unabomber in which two people were killed. If found guilty, he faces a death penalty. ... He has pleaded innocent to all charges. U.S. District Judge Garland Burrell Jr. presides in Sacramento. 
All of the slots provided by TAC for the Investigations and Trials domain can be identified by one or more caseframes. The DEFENDANT can be identified by (face, nsubj), and (plead, nsubj); the CHARGES by (face, dobj); the REASON by (indictment, prep for); the SENTENCE by (face, dobj); the PLEAD by (plead, dobj); and the INVESTIGATOR by (preside, nsubj). 4 Experiments We conducted our experiments on the data and results of the TAC 2010 summarization workshop. This data set contains 920 newspaper articles in 46 topics of 20 documents each. Ten are used in an initial guided summarization task, and ten are used in an update summarization task, in which a summary must be produced assuming that the original ten documents had already been read. All summaries have a word length limit of 100 words. We analyzed the results of the two summarization tasks separately in our experiments. The 46 topics belong to five different categories or domains: Accidents and natural disasters, Criminal or terrorist attacks, Health and safety, Endangered resources, and Investigations and trials. Each domain is associated with a template specifying the type of information that is expected in the domain, such as the participants in the event or the time that the event occurred. In our study, we compared the characteristics of summaries generated by the eight human summarizers with those generated by the peer summaries, which are basically extractive systems. There are 43 peer summarization systems, including two baselines defined by NIST. We refer to systems by their ID given by NIST, which are alphabetical for the human summarizers (A to H), and numeric for the peer summarizers (1 to 43). We removed two peer systems (systems 29 and 43) which did not generate any summary text in the workshop, presumably due to software problems. For each measure that we consider, we compare the average among the human-written summaries to the three individual peer systems, which we chose in order to provide a representative sample of the average and best performance of the automatic systems according to current evaluation methods. These systems are all primarily extractive, like most of the systems in the workshop: Peer average The average of the measure among the 41 peer summarizers. Peer 16 This system scored the highest in responsiveness scores on the original summarization task and in ROUGE-2, responsiveness, and Pyramid score in the update task. Peer 22 This system scored the highest in ROUGE-2 and Pyramid score in the original summarization task. 1236 1 42825121015313618428143037331319241621404132738173523393422 7 620261132 5 9G2 FBEADCH System IDs 0.0 0.5 1.0 1.5 2.0 Number of sentences (a) Initial guided summarization task 281311842410153624251213163034273 9 814373340 7 1139381935262317222163241 5 20FG2 ECBAHD System IDs 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 Number of sentences (b) Update summarization task Figure 2: Average sentence cover size: the average number of sentences needed to generate the caseframes in a summary sentence (Study 1). Model summaries are shown in darker bars. Peer system numbers that we focus on are in bold. Condition Initial Update Model average 1.58 1.57 Peer average 1.06 1.06 Peer 1 1.00 1.00 Peer 16 1.04 1.04 Peer 22 1.08 1.09 Table 2: The average number of source text sentences needed to cover a summary sentence. The model average is statistically significantly different from all the other conditions p < 10−7 (Study 1). 
Peer 1 The NIST-defined baseline, which is the leading sentence baseline from the most recent document in the source text cluster. This system scored the highest on linguistic quality in both tasks. 4.1 Study 1: Sentence aggregation We first confirm that human summarizers are more prone to sentence aggregation than system summarizers, showing that abstraction is indeed a desirable goal. To do so, we propose a measure to quantify the degree of sentence aggregation exhibited by a summarizer, which we call average sentence cover size. This is defined to be the minimum number of sentences from the source text needed to cover all of the caseframes found in a summary sentence (for those caseframes that can be found in the source text at all), averaged over all of the summary sentences. Purely extractive systems would thus be expected to score 1.0, as would systems that perform text compression by removing constituents of a source text sentence. Human summarizers would be expected to score higher, if they actually aggregate information from multiple points in the source text. To illustrate, suppose we assign arbitrary indices to caseframes, a summary sentence contains caseframes {1, 2, 3, 4, 5}, and the source text contains three sentences with caseframes, which can be represented as a nested set {{1, 3, 4}, {2, 5, 6}, {1, 4, 7}}. Then, the summary sentence can be covered by two sentences from the source text, namely {{1, 3, 4}, {2, 5, 6}}. This problem is actually an instance of the minimum set cover problem, in which sentences are sets, and caseframes are set elements. Minimum set cover is NP-hard in general, but the standard integer programming formulation of set cover sufficed for our data set; we used ILOG CPLEX 12.4’s mixed integer programming mode to solve all the set cover problems optimally. Results Figure 2 shows the ranking of the summarizers by this measure. Most peer systems have a low average sentence cover size of close to 1, which reflects the fact that they are purely or almost purely extractive. Human model summarizers show a higher degree of aggregation in their summaries. The averages of the tested conditions are shown in Table 2, and are statistically significant. Peer 2 shows a relatively high level of aggregation despite being an extractive system. Upon inspection of its summaries, it appears that Peer 2 tends to select many datelines, and without punctuation to separate them from the rest of the summary, our automatic analysis tools incorrectly merged many sentences together, resulting in incorrect parses and novel caseframes not found in 1237 A32B1242273733G1 5 28 7 392 EFH352615CD112093614194013168304 61031841213424172531222338 System IDs 0.00 0.02 0.04 0.06 0.08 0.10 0.12 Per word density (a) Initial guided summarization task EAGB37133C122726423911H28F152D3220355407 41081914303641183 921243413222516311762338 System IDs 0.00 0.02 0.04 0.06 0.08 0.10 Per word density (b) Update summarization task Figure 3: Density of signature caseframes (Study 2). Topic: Unabomber trial (charge, dobj), (kill, dobj), (trial, prep of), (bombing, prep in) Topic: Mangrove forests (beach, prep of), (save, dobj) (development, prep of), (recover, nsubj) Topic: Bird Flu (infect, prep with), (die, nsubj) (contact, dobj), (import, prep from) Figure 4: Examples of signature caseframes found in Study 2. the source text. 
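To make the average sentence cover size computation of Study 1 concrete, the sketch below finds the smallest number of source sentences whose caseframes jointly cover one summary sentence. The paper solves these set-cover instances exactly with ILOG CPLEX's mixed integer programming mode; the brute-force search here is only an illustration for small instances, and all data structures are assumed.

```python
from itertools import combinations

def min_sentence_cover(summary_caseframes, source_sentences):
    """Smallest number of source sentences whose caseframes jointly cover
    the summary sentence's caseframes (a minimum set-cover instance).

    summary_caseframes: set of caseframes from one summary sentence
    source_sentences:   list of caseframe sets, one per source sentence
    Caseframes never seen in the source are ignored, as in the paper.
    """
    covered_universe = set()
    for s in source_sentences:
        covered_universe |= s
    target = summary_caseframes & covered_universe
    if not target:
        return 0
    # Brute force over subsets of increasing size; an ILP solver is used
    # in the paper, but this suffices to illustrate the measure.
    for k in range(1, len(source_sentences) + 1):
        for subset in combinations(source_sentences, k):
            if target <= set().union(*subset):
                return k
    return len(source_sentences)

# The paper's own example: caseframes {1,2,3,4,5} and source sentences
# {1,3,4}, {2,5,6}, {1,4,7} can be covered by two sentences.
print(min_sentence_cover({1, 2, 3, 4, 5},
                         [{1, 3, 4}, {2, 5, 6}, {1, 4, 7}]))  # -> 2
```

Averaging this value over all summary sentences gives a summarizer's score, which is 1.0 for purely extractive systems.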
4.2 Study 2: Signature caseframe density Study 1 shows that human summarizers are more abstractive in that they aggregate information from multiple sentences in the source text, but how is this aggregation performed? One possibility is that human summary writers are able to pack a greater number of salient caseframes into their summaries. That is, humans are fundamentally relying on centrality just as automatic summarizers do, and are simply able to achieve higher compression ratios by being more succinct. If this is true, then sentence fusion methods over the source text alone might be able to solve the problem. Unfortunately, we show that this is false and that system summaries are actually more central than model ones. To extract topical caseframes, we use Lin and Hovy’s (2000) method of calculating signature terms, but extend the method to apply it at the caseframe rather than the word level. We follow Lin and Hovy (2000) in using a significance Condition Initial Update Model average 0.065 0.052 Peer average 0.080∗ 0.072∗ Peer 1 0.066 0.050 Peer 16 0.083∗ 0.085∗ Peer 22 0.101∗ 0.084∗ Table 3: Signature caseframe densities for different sets of summarizers, for the initial and update guided summarization tasks (Study 2). ∗: p < 0.005. threshold of 0.001 to determine signature caseframes2. Figure 4 shows examples of signature caseframes for several topics. Then, we calculate the signature caseframe density of each of the summarization systems. This is defined to be the number of signature caseframes in the set of summaries divided by the number of words in that set of summaries. Results Figure 3 shows the density for all of the summarizers, in ascending order of density. As can be seen, the human abstractors actually tend to use fewer signature caseframes in their summaries than automatic systems. Only the leading baseline is indistinguishable from the model average. Table 3 shows the densities for the conditions that we described earlier. The differences in density between the human average and the non-baseline conditions are highly statistically significant, according to paired two-tailed Wilcoxon signed-rank tests for the statistic calculated for each topic cluster. These results show that human abstractors do 2We tried various other thresholds, but the results were much the same. 1238 Threshold 0.9 0.8 Condition Init. Up. Init. Up. Model average 0.066 0.052 0.062 0.047 Peer average 0.080 0.071 0.071 0.063 Peer 1 0.068 0.050 0.060 0.044 Peer 16 0.083 0.086 0.072 0.077 Peer 22 0.100 0.086 0.084 0.075 Table 4: Density of signature caseframes after merging to various threshold for the initial (Init.) and update (Up.) summarization tasks (Study 2). not merely repeat the caseframes that are indicative of a topic cluster or use minor grammatical alternations in their summaries. Rather, a genuine sort of abstraction or distillation has taken place, either through paraphrasing or semantic inference, to transform the source text into the final informative summary. Merging Caseframes We next investigate whether simple paraphrasing could account for the above results; it may be the case that human summarizers simply replace words in the source text with synonyms, which can be detected with distributional similarity. Thus, we merged similar caseframes into clusters according to the distributional semantic similarity defined in Section 3.1, and then repeated the previous experiment. 
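The signature-caseframe selection and the density measure used in this study can be sketched in a few lines. The paper does not spell out the exact test statistic, so the code below uses the standard two-binomial log-likelihood ratio (Dunning's G²) commonly used for topic signatures, with the chi-square cut-off corresponding to the 0.001 significance level mentioned above; treat this as one plausible reading of the procedure, not the authors' implementation.

```python
import math

def _log_l(k, n, p):
    # Binomial log-likelihood, with p clipped away from 0 and 1.
    p = min(max(p, 1e-12), 1 - 1e-12)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def llr(k1, n1, k2, n2):
    """-2 log lambda for 'one binomial' vs 'two binomials' (Dunning's G^2)."""
    p, p1, p2 = (k1 + k2) / (n1 + n2), k1 / n1, k2 / n2
    return 2 * (_log_l(k1, n1, p1) + _log_l(k2, n2, p2)
                - _log_l(k1, n1, p) - _log_l(k2, n2, p))

def signature_caseframes(topic_counts, background_counts, critical=10.83):
    """Caseframes significantly over-represented in the topic cluster.
    10.83 is the chi-square critical value for p < 0.001 with 1 df."""
    n1, n2 = sum(topic_counts.values()), sum(background_counts.values())
    sig = set()
    for cf, k1 in topic_counts.items():
        k2 = background_counts.get(cf, 0)
        if k1 / n1 > k2 / n2 and llr(k1, n1, k2, n2) > critical:
            sig.add(cf)
    return sig

def signature_density(summary_caseframe_tokens, summary_word_count, sig):
    """Signature caseframe occurrences per word of summary text."""
    hits = sum(1 for cf in summary_caseframe_tokens if cf in sig)
    return hits / summary_word_count
```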
We chose two relatively high levels of similarity (0.8 and 0.9), and used complete-link agglomerative (i.e., bottom-up) clustering to merge similar caseframes. That is, each caseframe begins as a separate cluster, and the two most similar clusters are merged at each step until the desired similarity threshold is reached. Cluster similarity is defined to be the minimum similarity (or equivalently, maximum distance) between elements in the two clusters; that is, maxc∈C1,c′∈C2 −sim(c, c′). Complete-link agglomerative clustering tends to form coherent clusters where the similarity between any pair within a cluster is high (Manning et al., 2008). Cluster Results Table 4 shows the results after the clustering step, with similarity thresholds of 0.9 and 0.8. Once again, model summaries contain a lower density of signature caseframes. The statistical significance results are unchanged. This indicates that simple paraphrasing alone cannot account for the difference in the signature caseframe densities, and that some deeper abstraction or semantic inference has occurred. Note that we are not claiming that a lower density of signature caseframes necessarily correlates with a more informative summary. For example, some automatic summarizers are comparable to the human abstractors in their relatively low density of signature caseframes, but these turn out to be the lowest performing summarization systems by all measures in the workshop, and they are unlikely to rival human abstractors in any reasonable evaluation of summary informativeness. It does, however, appear that further optimizing centralitybased measures alone is unlikely to produce better informative summaries, even if we analyze the summary at a syntactic/semantic rather than lexical level. 4.3 Study 3: Summary Reconstruction The above studies show that the higher degree of abstraction in model summaries cannot be explained by better compression of topically salient caseframes alone. We now switch perspectives to ask how model summaries might be automatically generated at all. We will show that they cannot be reconstructed solely from the source text, extending Copeck and Szpakowicz (2004)’s result to caseframes. However, we also show that if articles from the same domain are added, reconstruction then becomes possible. Our measure of whether a model summary can be reconstructed is caseframe coverage. We define this to be the proportion of caseframes in a summary that is contained by some reference set. This is thus a score between 0 and 1. Unlike in the previous study, we use the full set of caseframes, not just signature caseframes, because our goal now to create a hypothesis space from which it is in principle possible to generate the model summaries. Results We first calculated caseframe coverage with respect to the source text alone (Figure 5). As expected, automatic systems show close to perfect coverage, because of their basically extractive nature, while model summaries show much lower coverage. These statistics are summarized by Table 5. These results present a fundamental limit to extractive systems, and also text simplification and sentence fusion methods based solely on the source text. 
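The coverage measure behind these Study 3 numbers is simple to restate in code. A minimal sketch, with all variable names our own and caseframes assumed to be (gov, role) tuples:

```python
def caseframe_coverage(summary_caseframes, reference_caseframes):
    """Proportion of a summary's caseframe occurrences that also appear
    somewhere in the reference set (the source text, possibly extended
    with in-domain or out-of-domain documents)."""
    if not summary_caseframes:
        return 0.0
    ref = set(reference_caseframes)
    hits = sum(1 for cf in summary_caseframes if cf in ref)
    return hits / len(summary_caseframes)

# Extending the reference set as in the domain experiment (names illustrative):
# caseframe_coverage(model_cfs, source_cfs)                  # source text only
# caseframe_coverage(model_cfs, source_cfs | in_domain_cfs)  # plus in-domain text
```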
The Impact of Domain Knowledge How might automatic summarizers be able to acquire these 1239 AGEBHFCD38172322063940 5 93414233519733411211372642212732428104 81316313025221151836 System IDs 0.0 0.2 0.4 0.6 0.8 1.0 Coverage (a) Initial guided summarization task GABEHCFD23817321141392035192621 5 23143740274212254 633 7 83022311024133415281 3 9161836 System IDs 0.0 0.2 0.4 0.6 0.8 1.0 Coverage (b) Update summarization task Figure 5: Coverage of summary text caseframes in source text (Study 3). Condition Initial Update Model average 0.77 0.75 Peer average 0.99 0.99 Peer 1 1.00 1.00 Peer 16 1.00 1.00 Peer 22 1.00 1.00 Table 5: Coverage of caseframes in summaries with respect to the source text. The model average is statistically significantly different from all the other conditions p < 10−8 (Study 3). caseframes from other sources? Traditional systems that perform semantic inference do so from a set of known facts about the domain in the form of a knowledge base, but as we have seen, most extractive summarization systems do not make much use of in-domain corpora. We examine adding in-domain text to the source text to see how this would affect coverage. Recall that the 46 topics in TAC 2010 are categorized into five domains. To calculate the impact of domain knowledge, we add all the documents that belong in the same domain to the source text to calculate coverage. To ensure that coverage does not increase simply due to increasing the size of the reference set, we compare to the baseline of adding the same number of documents that belong to another domain. As shown in Table 6, the effect of adding more in-domain text on caseframe coverage is substantial, and noticeably more than using out-of-domain text. In fact, nearly all caseframes can be found in the expanded set of articles. The implication of this result is that it may be possible to generate better summaries by mining in-domain text for relevant caseframes. Reference corpus Initial Update Source text only 0.77 0.75 +out-of-domain 0.91 0.91 +in-domain 0.98 0.97 Table 6: The effect on caseframe coverage of adding in-domain and out-of-domain documents. The difference between adding in-domain and outof-domain text is significant p < 10−3 (Study 3). 5 Conclusion We have presented a series of studies to distinguish human-written informative summaries from the summaries produced by current systems. Our studies are performed at the level of caseframes, which are able to characterize a domain in terms of its slots. First, we confirm that model summaries are more abstractive and aggregate information from multiple source text sentences. Then, we show that this is not simply due to summary writers packing together source text sentences containing topical caseframes to achieve a higher compression ratio, even if paraphrasing is taken into account. Indeed, model summaries cannot be reconstructed from the source text alone. However, our results are also positive in that we find that nearly all model summary caseframes can be found in the source text together with some indomain documents. Current summarization systems have been heavily optimized towards centrality and lexicalsemantical reasoning, but we are nearing the bottom of the barrel. Domain inference, on the other hand, and a greater use of in-domain documents as a knowledge source for domain inference, are very promising indeed. 
Mining useful caseframes 1240 for a sentence fusion-based approach has the potential, as our experiments have shown, to deliver results in just the areas where current approaches are weakest. Acknowledgements This work is supported by the Natural Sciences and Engineering Research Council of Canada. References Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3):297– 328. David Bean and Ellen Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004. Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 335–336. ACM. Nathanael Chambers and Dan Jurafsky. 2011. Template-based information extraction without the templates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 976– 986, Portland, Oregon, USA, June. Association for Computational Linguistics. John M. Conroy, Judith D. Schlesinger, and Dianne P. O’Leary. 2006. Topic-focused multi-document summarization using an approximate oracle score. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 152–159, Sydney, Australia, July. Association for Computational Linguistics. Terry Copeck and Stan Szpakowicz. 2004. Vocabulary agreement among model summaries and source documents. In Proceedings of the 2004 Document Understanding Conference (DUC). Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In In LREC 2006. Charles Fillmore. 1968. The case for case. In E. Bach and R. T. Harms, editors, Universals in Linguistic Theory, pages 1–88. Holt, Reinhart, and Winston, New York. Charles J. Fillmore. 1982. Frame semantics. Linguistics in the Morning Calm, pages 111–137. Pierre-Etienne Genest, Guy Lapalme, and Mehdi Yousfi-Monod. 2009. Hextac: the creation of a manual extractive run. In Proceedings of the Second Text Analysis Conference, Gaithersburg, Maryland, USA. National Institute of Standards and Technology. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2005. English gigaword second edition. Linguistic Data Consortium, Philadelphia. Liwei He, Elizabeth Sanocki, Anoop Gupta, and Jonathan Grudin. 1999. Auto-summarization of audio-video presentations. In Proceedings of the Seventh ACM International Conference on Multimedia. ACM. Liwei He, Elizabeth Sanocki, Anoop Gupta, and Jonathan Grudin. 2000. Comparing presentation summaries: slides vs. reading vs. listening. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’00, pages 177– 184, New York, NY, USA. ACM. Eduard Hovy, Chin-Yew Lin, Liang Zhou, and Junichi Fukumoto. 2006. Automated summarization evaluation with Basic Elements. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), pages 899–902. IBM. IBM ILOG CPLEX Optimization Studio V12.4. Hongyan Jing and Kathleen R. McKeown. 2000. Cut and paste based text summarization. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, pages 178–185. 
Kevin Knight and Daniel Marcu. 2000. Statisticsbased summarization-step one: Sentence compression. In Proceedings of the National Conference on Artificial Intelligence. Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceedings of the 18th Conference on Computational Linguistics - Volume 1, COLING ’00, pages 495–501, Stroudsburg, PA, USA. Association for Computational Linguistics. Chin-Yew Lin and Eduard Hovy. 2003. The potential and limitations of automatic sentence extraction for summarization. In Proceedings of the HLT-NAACL 03 on Text Summarization Workshop. Association for Computational Linguistics. Chin Y. Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Stan Szpakowicz and Marie-Francine Moens, editors, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain, July. Association for Computational Linguistics. Annie Louis and Ani Nenkova. 2009. Automatically evaluating content selection in summarization without human models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language 1241 Processing. Association for Computational Linguistics. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch¨utze, 2008. Introduction to Information Retrieval, chapter 17. Cambridge University Press. Kathleen R. McKeown, Regina Barzilay, David Evans, Vasileios Hatzivassiloglou, Judith L. Klavans, Ani Nenkova, Carl Sable, Barry Schiffman, and Sergey Sigelman. 2002. Tracking and summarizing news on a daily basis with Columbia’s Newsblaster. In Proceedings of the Second International Conference on Human Language Technology Research, pages 280–285. Morgan Kaufmann Publishers Inc. Ani Nenkova and Kathleen McKeown. 2003. References to named entities: a corpus study. In Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers. Association for Computational Linguistics. Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval, 5(2):103–233. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, volume 2004, pages 145–152. Dragomir R. Radev and Kathleen R. McKeown. 1998. Generating natural language summaries from multiple on-line sources. Computational Linguistics, 24(3):470–500. Horacio Saggion and Guy Lapalme. 2002. Generating indicative-informative summaries with SumUM. Computational linguistics, 28(4):497–526. Horacio Saggion, Juan-Manuel Torres-Moreno, Iria Cunha, and Eric SanJuan. 2010. Multilingual summarization evaluation without human models. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1059– 1067. Association for Computational Linguistics. Peter Turney. 2001. Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of the Twelth European Conference on Machine Learning (ECML-2001), pages 491–502. Michael White, Tanya Korelsky, Claire Cardie, Vincent Ng, David Pierce, and Kiri Wagstaff. 2001. Multidocument summarization via information extraction. In Proceedings of the First International Conference on Human Language Technology Research. Association for Computational Linguistics. 1242
2013
121
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1243–1253, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics HEADY: News headline abstraction through event pattern clustering Enrique Alfonseca Google Inc. [email protected] Daniele Pighin Google Inc. [email protected] Guillermo Garrido∗ NLP & IR Group at UNED [email protected] Abstract This paper presents HEADY: a novel, abstractive approach for headline generation from news collections. From a web-scale corpus of English news, we mine syntactic patterns that a Noisy-OR model generalizes into event descriptions. At inference time, we query the model with the patterns observed in an unseen news collection, identify the event that better captures the gist of the collection and retrieve the most appropriate pattern to generate a headline. HEADY improves over a state-of-theart open-domain title abstraction method, bridging half of the gap that separates it from extractive methods using humangenerated titles in manual evaluations, and performs comparably to human-generated headlines as evaluated with ROUGE. 1 Introduction Motivation. News events are rarely reported only in one way, from a single point of view. Different news agencies will interpret the event in different ways; various countries or locations may highlight different aspects of it depending on how they are affected; and opinions and in-depth analyses will be written after the fact. The variety of contents and styles is both an opportunity and a challenge. On the positive side, we have the same events described in different ways; this redundancy is useful for summarization, as the information content reported by the majority of news sources most likely represents the central part of the event. On the other hand, variability and subjectivity can be difficult to isolate. For some applications it is important to understand, given a collection of related news articles and re∗Work done during an internship at Google Zurich. • Carmelo and La La Party It Up with Kim and Ciara • La La Vazquez and Carmelo Anthony: Wedding Day Bliss • Carmelo Anthony, actress LaLa Vazquez wed in NYC • Stylist to the Stars • LaLa, Carmelo Set Off Celebrity Wedding Weekend • Ciara rocks a sexy Versace Spring 2010 mini to LaLa Vasquez and Carmelo Anthony’s wedding (photos) • Lala Vasquez on her wedding dress, cake, reality tv show and fianc´e, Carmelo Anthony (video) • VAZQUEZ MARRIES SPORTS STAR ANTHONY • Lebron Returns To NYC For Carmelo’s Wedding • Carmelo Anthony’s stylist dishes on the wedding • Paul pitching another Big Three with “Melo in NYC” • Carmelo Anthony and La La Vazquez Get Married at Star-Studded Wedding Ceremony Table 1: Headlines observed for a news collection reporting the same wedding event. ports, how to formulate in an objective way what has happened. As a motivating example, Table 1 shows the different headlines observed in news reporting the wedding between basketball player Carmelo Anthony and actress LaLa Vazquez. As can be seen, there is a wide variety of ways to report the same event, including different points of view, highlighted aspects, and opinionated statements on the part of the reporter. When presenting this event to a user in a news-based information retrieval or recommendation system, different event descriptions may be more appropriate. For example, a user may only be interested in objective, informative summaries without any interpretation on the part of the reporter. 
In this case, Carmelo Anthony, ac1243 tress LaLa Vazquez wed in NYC would be a good choice. Goal. Our final goal in this research is to build a headline generation system that, given a news collection, is able to describe it with the most compact, objective and informative headline. In particular, we want the system to be able to: • Generate headlines in an open-domain, unsupervised way, so that it does not need to rely on training data which is expensive to produce. • Generalize across synonymous expressions that refer to the same event. • Do so in an abstractive fashion, to enforce novelty, objectivity and generality. In order to advance towards this goal, this paper explores the following questions: • What is a good way of using syntactic patterns to represent events for generating headlines? • Can we have satisfactory readability with an open-domain abstractive approach, not relying on training data nor on manually predefined generation templates? • How far can we get in terms of informativeness, compared to the human-produced headlines, i.e., extractive approaches? Contributions. In this paper we present HEADY, which is at the same time a novel system for abstractive headline generation, and a smooth clustering of patterns describing the same events. HEADY is fully open-domain and can scale to web-sized data. By learning to generalize events across the boundaries of a single news story or news collection, HEADY produces compact and effective headlines that objectively convey the relevant information. When compared to a state-of-the-art opendomain headline abstraction system (Filippova, 2010), the new headlines are statistically significantly better both in terms of readability and informativeness. Also, automatic evaluations using ROUGE, having objective headlines for the news as references, show that the abstractive headlines are on par with human-produced headlines. 2 Related work Headline generation and summarization. Most headline generation work in the past has focused on the problem of single-document summarization: given the main passage of a single news article, generate a very short summary of the article. From early in the field, it was pointed out that a purely extractive approach is not good enough to generate headlines from the body text (Banko et al., 2000). Sometimes the most important information is distributed across several sentences in the document. More importantly, quite often, the single sentence selected as the most informative for the news collection is already longer than the desired headline size. For this reason, most early headline generation work focused on either extracting and reordering n-grams from the document to be summarized (Banko et al., 2000), or extracting one or two informative sentences from the document and performing linguistically-motivated transformations to them in order to reduce the summary length (Dorr et al., 2003). The first approach is not guaranteed to produce grammatical headlines, whereas the second approach is tightly tied to the actual wording found in the document. Single-document headline generation was also explored at the Document Understanding Conferences between 2002 and 20041. 
In later years, there has been more interest in problems such as sentence compression (Galley and McKeown, 2007; Clarke and Lapata, 2008; Cohn and Lapata, 2009; Napoles et al., 2011; Berg-Kirkpatrick et al., 2011), text simplification (Zhu et al., 2010; Coster and Kauchak, 2011; Woodsend and Lapata, 2011) and sentence fusion (Barzilay and McKeown, 2005; Wan et al., 2007; Filippova and Strube, 2008; Elsner and Santhanam, 2011). All of them have direct applications for headline generation, as it can be construed as selecting one or a few sentences from the original document(s), and then reducing them to the target title size. For example, Wan et al. (2007) generate novel utterances by combining Prim’s maximum-spanning-tree algorithm with an n-gram language model to enforce fluency. Unlike HEADY, the method by Wan and colleagues is an extractive method that can summarize single documents into a sentence, as opposed to generating a sentence that can stand for a whole collec1http://duc.nist.gov/ 1244 tion of news. Filippova (2010) reports a system that is very close to our settings: the input is a collection of related news articles, and the system generates a headline that describes the main event. This system uses sentence compression techniques and benefits from the redundancy in the collection. One difference with respect to HEADY is that it does not use any syntactic information aside from part-of-speech tags, and it does not require a training step. We have used this approach as a baseline for comparison. There are not many fully abstractive systems for news summarization. The few that exist, such as the work by Genest and Lapalme (2012), rely on manually written generation templates. In contrast, HEADY automatically learns the templates or headline patterns automatically, which allows it to work in open-domain settings without relying on supervision or manual annotations. Open-domain pattern learning. Pattern learning for relation extraction is an active area of research that is very related to our problem of event pattern learning for headline generation. TextRunner (Yates et al., 2007), ReVerb (Fader et al., 2011) and NELL (Carlson et al., 2010; Mohamed et al., 2011) are some examples of open-domain systems that learn surface patterns that express relations between pairs of entities. PATTY (Nakashole et al., 2012) generalizes the patterns to also include syntactic information and ontological (class membership) constraints. Our patterns are more similar to the ones used by PATTY, which also produces clusters of synonymous patterns. The main differences are that (a) HEADY is not limited to consider patterns expressing relations between pairs of entities; (b) we identify synonym patterns using a probabilistic, Bayesian approach that takes advantage of the multiplicity of news sources reporting the same events. Chambers and Jurafsky (2009) present an unsupervised method for learning narrative schemas from news, i.e., coherent sets of events that involve specific entity types (semantic roles). Similarly to them, we move from the assumptions that 1) utterances involving the same entity types within the same document (in our case, a collection of related documents) are likely describing aspects of the same event, and 2) meaningful representations of the underlying events can be learned by clustering these utterances in a principled way. Noisy-OR networks. 
Noisy-OR Bayesian networks (Pearl, 1988) have been applied in the past to a wide class of large-scale probabilistic inference problems, from the medical domain (Middleton et al., 1991; Jaakkola and Jordan, 1999; Onisko et al., 2001), to synthetic image-decomposition and co-citation data analysis (ˇSingliar and Hauskrecht, 2006). By assuming independence between the causes of the hidden variables, noisy-OR models tend to be reliable (Friedman and Goldszmidt, 1996) as they require a relatively small number of parameters to be estimated (linear with the size of the network). 3 Headline generation In this section, we describe the HEADY system for news headline abstraction. Our approach takes as input, for training, a corpus of news articles organized in news collections. Once the model is trained, it can generate headlines for new collections. An outline of HEADY’s main components follows (details of each component are provided in Sections 3.1, 3.2 and 3.3): Pattern extraction. Identify, in each of the news collections, syntactic patterns connecting k entities, for k ≥1. These will be the candidate patterns expressing events. Training. Train a Noisy-OR Bayesian network on the co-occurrence of syntactic patterns. Each pattern extracted in the previous step is added as an observed variable, and latent variables are used to represent the hidden events that generate patterns. An additional noise variable links to every terminal node, allowing every terminal to be generated by language background (noise) instead of by an actual event. Inference. Generate a headline from an unseen news collection. First, patterns are extracted using the pattern extraction procedure mentioned above. Given the patterns, the posterior probability of the hidden event variables is estimated. Then, from the activated hidden events, the likelihood of every pattern can be estimated, even if they do not appear in the collection. The single pattern with the maximum probability is selected to generate a new headline from it. Being the product of extranews collection generalization, the retrieved pattern is more likely to be objective and informative than patterns directly observed in the news collection. 1245 Algorithm 1 COLLECTIONTOPATTERNSΨ(N): N is a repository of news collections, Ψ is a set of parameters controlling the extraction process. R ←{} for all N ∈N do PREPROCESSDATA(N) E ←GETRELEVANTENTITIES(N′) for all Ei ←COMBINATIONSΨ(E) do for all n ∈N do P ←EXTRACTPATTERNSΨ(n, Ei) R{N, Ei} ←R{N, Ei} ∪P return R 3.1 Pattern extraction In this section we detail the process for obtaining the event patterns that constitute the building blocks of learning and inference. Patterns are extracted from a large repository N of news collections N1, . . . , N|N|. Each news collection N = {ni} is an unordered collection of related news, each of which can be seen as an ordered sequence of sentences, i.e.: n = [s0, . . . s|n|]. Algorithm 1 presents a high-level view of the pattern extraction process. The different steps are described below: PREPROCESSDATA: We start by preprocessing all the news in the news collections with a standard NLP pipeline: tokenization and sentence boundary detection (Gillick, 2009), part-of-speech tagging, dependency parsing (Nivre, 2006), coreference resolution (Haghighi and Klein, 2009) and entity linking based on Wikipedia and Freebase. Using the Freebase dataset, each entity is annotated with all its Freebase types (class labels). 
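To make the output of this preprocessing stage concrete, one possible representation of the per-entity and per-sentence annotations is sketched below; the field names and types are illustrative assumptions, not the authors' actual data structures.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EntityAnnotation:
    """One entity resolved in a document after coreference and entity linking."""
    entity_id: str                                   # unique identifier
    mentions: List[Tuple[int, Tuple[int, int]]] = field(default_factory=list)
    # (sentence index, token span) for each mention in the document
    freebase_types: List[str] = field(default_factory=list)
    # class labels, e.g. ["person", "celebrity"]

@dataclass
class AnnotatedSentence:
    """A sentence with the layers used later for pattern extraction."""
    tokens: List[str]
    pos_tags: List[str]
    dependency_edges: List[Tuple[int, int, str]]     # (head, dependent, relation)
    entity_ids: List[str]                            # entity id per token, "" if none
```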
In the end, for each entity mentioned in the document we have a unique identifier, a list with all its mentions in the document and a list of class labels from Freebase. As a result of this process, we obtain for each sentence in the corpus a representation as exemplified in Figure 1 (1). In this example, the mentions of three distinct entities have been identified, i.e., e1, . . . , e3. In the Freebase list of types (class labels), e1 is a person and a celebrity, and e3 is a state and a location. GETRELEVANTENTITIES: For each news collection N we collect the set E of the entities mentioned most often within the collection. Next, we generate the set COMBINATIONSΨ(E) consisting NNP CC NNP TO VB IN NNP Portia and Helen to marry in California e1 e2 e3 person actress state celebrity location root cc conj nsubj aux prep pobj 1 NNP NNP e1 e2 person actress celebrity conj 2 NNP CC NNP TO VB e1 and e2 to marry person actress celebrity cc conj nsubj aux 3 NNP CC NNP TO VB person and actress to marry cc conj nsubj aux 4 NNP CC NNP TO VB celebrity and actress to marry cc conj nsubj aux Figure 1: Pattern extraction process from an annotated dependency parse. (1): an MST is extracted from the entity pair e1, e2 (2); nodes are heuristically added to the MST to enforce grammaticality (3); entity types are recombined to generate the final patterns (4). of non-empty subsets of E, without repeated entities. The number of entities to consider in each collection, and the maximum size for the subsets of entities to consider are meta-parameters embedded in Ψ.2 EXTRACTPATTERNS: For each subset of relevant entities Ei, event patterns are mined from the articles in the news collection. The process by which patterns are extracted from a news is explained in Algorithm 2 and exemplified graphically in Figure 1 (2–4). GETMENTIONNODES: Using the dependency parse T for a sentence s, we first identify the set of nodes Mi that mention the entities in Ei. If T does not contain exactly one mention of each target entity in Ei, then the sentence is ignored. Otherwise, we obtain the minimum spanning tree for the nodeset Pi, i.e., the shortest path in the dependency tree connecting all the nodes in Mi (Figure 1, 2). Pi is the set of nodes around which the patterns will be constructed. APPLYHEURISTICS: With very high probability, the MST Pi that we obtain does not constitute a grammatical or useful extrapolation of the original sentence s. For example, the MST for the en2As our objective is to generate very short titles (under 10 words), we only consider combinations of up to three elements of E. 1246 Algorithm 2 EXTRACTPATTERNSΨ(n, Ei): n is the list of sentences in a news article. Sentences are POS-tagged, dependency parsed and annotated with respect to a set of entities E ⊇Ei P ←∅ for all s ∈n[0 : 2) do T ←DEPPARSE(s) Mi ←GETMENTIONNODES(t, Ei) if ∃e ∈Ei, count(e, Mi) ̸= 1 then continue Pi ←GETMINIMUMSPANNINGTREEΨ(Mi) APPLYHEURISTICSΨ(Pi) or continue P ←P ∪COMBINEENTITYTYPESΨ(Pi) return P tity pair ⟨e1, e2⟩in the example does not provide a good description of the event as it is neither adequate nor fluent. For this reason, we apply a set of post-processing heuristic transformations that aim at including a minimal set of meaningful nodes. These include making sure that both the root of the clause and its subject appear in the extracted pattern, and that conjunctions between entities should not be dropped (Figure 1, 3). 
COMBINEENTITYTYPES: Finally, a distinct pattern is generated from each possible combination of entity type assignments for the participating entities. (Figure 1, 4). It is important to note that both at training and test time, for pattern extraction we only consider the title and the first sentence of the article body. The reason is that we want to limit ourselves, in each news collection, to the most relevant event reported in the collection, which appears most of the times in these two sentences. Unlike titles, first sentences do not extensively use puns or rhetorics as they tend to be grammatical and informative rather than catchy. The patterns mined from the same news collection and for the same set of entities are grouped together, and constitute the building blocks of the clustering algorithm which is described below. 3.2 Training The extracted patterns are used to learn a NoisyOR (Pearl, 1988) model by estimating the probability that each (observed) pattern activates one or many (hidden) events. Figure 2 represents the two levels: the hidden event variables at the top, and the observed pattern variables at the bottom. An additional noise variable links to every termie1 ... en noise p3 p2 p1 ... pm Figure 2: Probabilistic model. The associations between latent event variables and observed pattern variables are modeled by noisy-OR gates. Events are assumed to be marginally independent, and patterns conditionally independent given the events. nal node, allowing all terminals to be generated by language background (noise) instead of by an actual event. The associations between latent events and observed patterns are modeled by noisy-OR gates. In this model, the conditional probability of a hidden event ei given a configuration of observed patterns p ∈{0, 1}|P| is calculated as: P(ei = 0 | p) = (1 −qi0) Y j∈πj (1 −qij)pj = exp  −θi0 − X j∈πi θijpj  , where πi is the set of active events (i.e., πi = ∪j{pj} | pj = 1), and qij = P(ei = 1 | pj = 1) is the estimated probability that the observed pattern pi can, in isolation, activate the event e. The term qi0 is the so-called “noise” term of the model, and it accounts for the fact that an observed event ei might be activated by some pattern that has never been observed (Jaakkola and Jordan, 1999). In Algorithm 1, at the end of the process we group in R[N, Ei] all the patterns extracted from the same news collection N and entity sub-set Ei. These groups represent rough clusters of patterns, that we can use to bootstrap the optimization of the model parameters θij = −log(1 −qij). We initiate the training process by randomly selecting 100,000 of these groups, and optimize the weights of the model through 40 EM (Dempster et al., 1977) iterations. 3.3 Inference (generation of new headlines) Given an unseen news collection N, the inference component of HEADY generates a single headline that captures the main event reported by the news in N. In order to do so, we first need to select a 1247 single event-pattern p∗that is especially relevant for N. Having selected p∗, in order to generate a headline it is sufficient to replace the entity placeholders in p∗with the surface forms observed in N. To identify p∗, we start from the assumption that the most descriptive event encoded by N must describe an important situation in which some subset of the relevant entities E in N are involved. The basic inference algorithm is a twostep random walk in the Bayesian network. 
Given a set of entities E and sentences n, EXTRACTPATTERNSΨ(n, E) collects patterns involving those entities. By normalizing the frequency of the extracted patterns, we get a probability distribution over the observed variables in the network. A two-step random walk traversing to the latent event nodes and back to the pattern nodes allows us to generalize across events. We call this algorithm INFERENCE(n, E). In order to decide which is the most relevant set of events that should appear in the headline, we use the following procedure: 1. Given the set of entities E mentioned in the news collection, we consider each entity subset Ei ⊆E including up to three entities3. For each Ei, we run INFERENCE(n, Ei), which computes a distribution wi over patterns involving the entities in Ei. 2. We invoke again INFERENCE, now using at the same time all the patterns extracted for every subset of Ei ⊆E. This computes a probability distribution w over all patterns involving any admissible subset of the entities mentioned in the collection. 3. Third, we select the entity-specific distribution that approximates better the overall distribution w∗= arg max i cos(w, wi) We assume that the corresponding set of entities Ei are the most central entities in the collection and therefore any headline should make sure to mention them all. 3As we noted before, we impose this limitation to keep the generated headlines relatively short and to limit data sparsity issues. 4. Finally, we select the pattern with the highest weight in w∗as the pattern that better captures the main event reported in the news collection: p∗= pj | wj = arg max j w∗ j The headline is then produced from p∗, replacing placeholders with the entities in the document from which the pattern was extracted. While in many cases information about entity types would be sufficient to decide about the order of the entities in the generated sentences (e.g., “[person] married in [location]” for the entity set {ea = “Mr. Brown”, eb = “Los Angeles”}), in other cases class assignment can be ambiguous (e.g., “[person] killed [person]” for {ea = “Mr. A”, eb = “Mr. B”}). To handle these cases, when extracting patterns for an entity set {ea, eb}, we keep track of the alphabetical ordering of the entities, e.g., from a news collection about “Mr. B” killing “Mr. A” we would produce patterns such as “[person:2] killed [person:1]” or “[person:1] was killed by [person:2]” since ea = “Mr. A” < eb = “Mr. B”. At inference time, when we query the model with such patterns we can only activate events whose assignments are compatible with the entities observed in the text, making the replacement straightforward and unambiguous. 4 Experiment settings In our method we use patterns that are fully lexicalized (with the exception of entity placeholders) and enriched with syntactic data. Under these circumstances, the Noisy-OR can effectively generalize and learn meaningful clusters only if provided with large amounts of data. To our best knowledge, available data sets for headline generation are not large enough to support this kind of inference. For this reason, we rely on a corpus of news crawled from the web between 2008 and 2012 which have been clustered based on closeness in time and cosine similarity, using the vector-space model and tf.idf weights. 
News collections with fewer than 5 documents are discarded4, and those larger than 50 documents are capped, by randomly picking 50 documents from the collection5. The total number of news collections after clustering is 1.7 million. From this set, we have set aside a few hundred collections that will remain unseen until the final evaluation. As we have no development set, we have done no tuning of the parameters for pattern extraction nor for the Bayesian network training (100,000 latent variables to represent the different events, 40 EM iterations, as mentioned in Section 3.2). The EM iterations on the noisy-OR were distributed across 30 machines with 16 GB of memory each.

4 There is a very long tail of singleton articles, which do not offer useful examples of lexical or syntactic variation, and many very small collections that tend to be especially noisy, hence the decision to consider only collections with at least 5 documents.

5 Even though we did not run any experiment to find an optimal value for this parameter, 50 documents seems like a reasonable choice to avoid redundancy while allowing for considerable lexical and syntactic variation.

4.1 Systems used

One of the questions we wanted to answer in this research was whether it was possible to obtain the same quality with automatically abstracted headlines as with human-generated headlines. For every news collection we have as many human-generated headlines as documents. To decide which human-generated headline should be used in this comparison, we used three different methods that pick one of the collection headlines:

• Latest headline: selects the headline from the latest document in the collection. Intuitively this should be the most relevant one for news about sport matches and competitions, where the earlier headlines offer previews and predictions, and the later headlines report who won and the final scores.

• Most frequent headline: some headlines are repeated across the collection, and this method chooses the most frequent one. If there are several with the same frequency, one is taken at random6.

• TopicSum: we use TopicSum (Haghighi and Vanderwende, 2009), a 3-layer hierarchical topic model, to infer the language model that is most central for the collection. The news title that has the smallest Kullback-Leibler divergence with respect to the collection language model is the one chosen.

6 The most frequent headline only has a tie in 6 collections in the whole test set. In 5 cases two headlines are tied at frequencies around 4, and in one case three headlines are tied at frequency 2. All six are large collections with 50 news articles, so this baseline is significantly different from a random baseline.

                         R-1      R-2      R-SU4
HEADY                    0.3565   0.1903   0.1966
Most frequent pattern    0.3560   0.1864   0.1959
TopicSum                 0.3594   0.1821   0.1935
MSC                      0.3470   0.1765   0.1855
Most frequent headline   0.3177   0.1401   0.1668
Latest headline          0.2814   0.1191   0.1425

Table 2: Results from the automatic evaluation, sorted according to the ROUGE-2 and ROUGE-SU4 scores.

A headline generation system that addresses the same application as ours is that of Filippova (2010), which generates a graph from the collection sentences and selects the shortest path between the begin and the end node, traversing words in the same order in which they were found in the original documents. We have used this system, called Multi-Sentence Compression (MSC), for comparisons.
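To give a flavour of how such a word-graph baseline operates, the following is a heavily simplified sketch (our illustration only; the actual MSC system merges words using additional information such as part of speech and weights edges using word frequencies and other statistics, all omitted here, and the networkx library plus the <start>/<end> markers are our own choices for the example):

```python
import networkx as nx

sentences = [
    "armstrong was stripped of his seven tour titles".split(),
    "lance armstrong was stripped of the titles on monday".split(),
]

g = nx.DiGraph()
for words in sentences:
    path = ["<start>"] + words + ["<end>"]
    for a, b in zip(path, path[1:]):
        # Identical words map to the same node, so the sentences overlap in the graph.
        g.add_edge(a, b)

# With unit edge costs, the shortest start-to-end path is the tersest fused sentence.
compression = nx.shortest_path(g, "<start>", "<end>")[1:-1]
print(" ".join(compression))  # -> armstrong was stripped of the titles
```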
Finally, in order to understand whether the noisy-OR Bayesian network is useful for generalizing across patterns into latent events, we added a baseline that extracts all patterns from the test collection following the same COLLECTIONTOPATTERNS algorithm (including the application of the linguistically motivated heuristics), and then produces a headline straightaway from the most frequent pattern extracted. In other words, the only difference with respect to HEADY is that in this case no generalization through the Noisy-OR network is carried out, and that headlines are generated from patterns directly observed in the test news collections. We call this system Most frequent pattern.

4.2 Annotation activities

In order to evaluate HEADY's performance, we carried out two annotation activities. First, from the set of collections that we had set aside at the beginning, we randomly chose 50 collections for which all the systems could generate an output, and we asked raters to manually write titles for them. As this is not a simple task to be crowdsourced, for this evaluation we relied on eight trained raters. We collected between four and five reference titles for each of the fifty news collections, to be used to compare the headline generation methods using automatic summarization metrics.

                         Readability   Informativeness
TopicSum                 4.86          4.63
Most freq. headline      †‡ 4.61       †‡⋆ 4.43
Latest headline          †‡ 4.55       † 4.00
HEADY                    † 4.28        † 3.75
Most freq. pattern       † 3.95        † 3.82
MSC                      3.00          3.05

Table 3: Results from the manual evaluation. At 95% confidence, TopicSum is significantly better than all others for readability, and only indistinguishable from the most frequent pattern for informativeness. For the rest, ⋆ means being significantly better than HEADY, ‡ than the most frequent pattern, and † than MSC.

Then, we took the output of the systems for the 50 test collections and asked human raters to evaluate the headlines:

1. Raters were shown one headline and asked to rate it in terms of readability on a 5-point Likert scale. In the instructions, the raters were provided with examples of ungrammatical and grammatical titles to guide them in this annotation.

2. After the previous rating was done, raters were shown a selection of five documents from the collection, and they were asked to judge the informativeness of the previous headline for the news in the collection, again on a 5-point Likert scale.

This second annotation was carried out by independent raters in a crowd-sourcing setting. The raters did not have any involvement with the inception of the model or the writing of the paper. They did not know that the headlines they were rating were generated according to different methods. We measured inter-judge agreement on the Likert-scale annotations using their Intra-Class Correlation (ICC) (Cicchetti, 1994). The ICC for readability is 0.76 (0.95 confidence interval [0.71, 0.80]), and for informativeness it is 0.67 (0.95 confidence interval [0.60, 0.73]). This means strong agreement for readability, and moderate agreement for informativeness.

5 Results

The COLLECTIONTOPATTERNS algorithm was run on the training set, producing 230 million event patterns. Patterns that were obtained from the same collection and involving the same entities were grouped together, for a total of 1.7 million pattern collections. The pattern groups are used to bootstrap the Noisy-OR model training. Training the HEADY model that we used for the evaluation took around six hours on 30 cores.
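The automatic evaluation that follows relies on ROUGE overlap scores. As a minimal reminder of the underlying computation (a simplified, single-reference ROUGE-1 F-score written by us for illustration; the official toolkit additionally handles stemming, multiple references and the skip-bigram variant ROUGE-SU4), consider:

```python
from collections import Counter

def rouge_1_f(candidate, reference):
    """Unigram-overlap F-score between a candidate headline and one reference."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_1_f("Lance Armstrong stripped of Tour de France titles",
                "UCI to strip Lance Armstrong of his 7 Tour titles"))
```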
Table 2 shows the results of the comparison of the headline generation systems using ROUGE (R-1, R-2 and R-SU4) (Lin, 2004) with the collected references. According to Owczarzak et al. (2012), ROUGE is still a competitive metric that correlates well with human judgements for ranking summarizers. The significance tests for ROUGE are performed using bootstrap resampling and a graphical significance test (Minka, 2002). The human annotators that created the references for this evaluation were explicitly instructed to write objective titles, which is the kind of headlines that the abstractive systems aim at generating. It is common to see real headlines that are catchy, joking, or with a double meaning, and therefore they use a different vocabulary than objective titles that simply mention what happened. TopicSum sometimes selects objective titles amongst the human-made titles and that is why it also scores very well with the ROUGE scores. But the other two criteria for choosing human-made headlines select non-objective titles much more often, and this lowers their performance when measured with ROUGE with respect to the objective references. Table 3 lists the results of the manual evaluation of readability and informativeness of the generated headlines. The first result that we can see is the difference in the rankings between the two evaluations. Part of this difference might be due to the fact that ROUGE is not as good for discriminating between human-made and automatic summaries. In fact, in the DUC competitions, the gap between human summaries and automatic summaries was also more apparent in the manual evaluations than using ROUGE. Another part of the observed difference may be due to the design of the evaluation. The manual evaluation is asking raters to judge whether real, human-written titles that were actually used for those news are grammatical and informative. As could be expected, as these are published titles, the real titles score very good on the manual evaluation. Some other interesting results are: 1250 Model Generated title TopicSum Modern Family’s Eric Stonestreet laughs off Charlize Theron rumours MSC Modern Family star Eric Stonestreet is dating Charlize Theron. Latest headline Eric laughs off Theron dating rumours Frequent pattern Eric Stonestreet jokes about Charlize relationship Frequent headline Charlize Theron dating Modern Family star HEADY Eric Stonestreet not dating Charlize Theron TopicSum McFadzean rescues point for Crawley Town MSC Crawley side challenging for a point against Oldham Athletic. Latest headline Reds midfielder victim of racist tweet Frequent pattern Kyle McFadzean fired a equaliser Crawley were made Frequent headline Latics halt Crawley charge HEADY Kyle McFadzean rescues point for Crawley Town F.C. TopicSum UCI to strip Lance Armstrong of his 7 Tour titles MSC The international cycling union said today. Latest headline Letters: elderly drivers and Lance Armstrong Frequent pattern Lance Armstrong stripped of Tour de France titles Frequent headline Today in the news: third debate is tonight HEADY Lance Armstrong was stripped of Tour de France titles Table 4: A comparison of the titles generated by the different models for three news collections. • Amongst the automatic systems, HEADY performed better than MSC, with statistical significance at 95% for all the metrics. Headlines based on the most frequent patterns were better than MSC for all metrics but ROUGE-2. 
• The most frequent pattern baseline and HEADY have comparable performance across all the metrics (not statistically significantly different), although HEADY has slightly better scores for all metrics except for informativeness. While we do not take any step to explicitly model stylistic variation, estimating the weights of the Noisy-OR network turns out to be a very effective way of filtering out sensational wording to the advantage of plainer, more objective style. This may not clearly emerge from the evaluation, as we did not explicitly ask the raters to annotate the items based on their objectivity, but a manual inspection of the clusters suggests that the generalization is working in the right direction. Table 4 presents a selection of outputs produced by the six models for three different news collections. The first example shows a news collection containing news about a rumour that was immediately denied. In the second example, HEADY generalization improves over the most frequent pattern. In the third case, HEADY generates a good title from a noisy collection (containing different but related events). The examples also show that TopicSum is very effective in selecting a good human-generated headline for each collection. This opens the possibility of using TopicSum to automatically generate ROUGE references for future evaluations of abstractive methods. 6 Conclusions We have presented HEADY, an abstractive headline generation system based on the generalization of syntactic patterns by means of a Noisy-OR Bayesian network. We evaluated the model both automatically and through human annotations. HEADY performs significantly better than a stateof-the-art open domain abstractive model (Filippova, 2010) in all evaluations, and is in par with human-generated headlines in terms of ROUGE scores. We have shown that it is possible to achieve high quality generation of news headlines in an open-domain, unsupervised setting by successfully exploiting syntactic and ontological information. The system relies on a standard NLP pipeline, requires no manual data annotation and can effectively scale to web-sized corpora. For feature work, we plan to improve all components of HEADY in order to fill in the gap with the human-generated titles in terms of readability and informativeness. One of the directions in which we plan to move is the removal of the syntactic heuristics that currently enforce pattern wellformedness and to automatically learn the necessary transformations from the data. Two other lines of work that we plan to explore are the possibility of personalizing the headlines to user interests (as stored in user profiles or expressed as user queries), and to investigate further applications of the Bayesian network of event patterns, such as its use for relation extraction and knowledge base population. Acknowledgments The research leading to these results has received funding from: the EU’s 7th Framework Programme (FP7/2007-2013) under grant agreement number 257790; the Spanish Ministry of Science and Innovation’s project Holopedia (TIN201021128-C02); and the Regional Government of Madrid’s MA2VICMR (S2009/TIC1542). We would like to thank Katja Filippova and the anonymous reviewers for their insightful comments. 1251 References Michele Banko, Vibhu O. Mittal, and Michael J. Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, ACL ’00, pages 318–325. 
Association for Computational Linguistics. Regina Barzilay and Kathleen R McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3):297– 328. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 481–490. Association for Computational Linguistics. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of the Twenty-Fourth Conference on Artificial Intelligence (AAAI 2010), pages 3–3. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised Learning of Narrative Schemas and Their Participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2, pages 602–610. Domenic V Cicchetti. 1994. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6(4):284. James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, 31(1):399–429. Trevor Cohn and Mirella Lapata. 2009. Sentence compression as tree transduction. Journal of Artificial Intelligence Research, 34:637–674. William Coster and David Kauchak. 2011. Learning to simplify sentences using Wikipedia. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, pages 1–9. Association for Computational Linguistics. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubi. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLTNAACL 03 on Text summarization workshop-Volume 5, pages 1–8. Association for Computational Linguistics. Micha Elsner and Deepak Santhanam. 2011. Learning to fuse disparate sentences. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, pages 54–63. Association for Computational Linguistics. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1535–1545. Association for Computational Linguistics. Katja Filippova and Michael Strube. 2008. Sentence fusion via dependency graph compression. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 177–185. Association for Computational Linguistics. Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 322–330. Association for Computational Linguistics. Nir Friedman and Moises Goldszmidt. 1996. Learning Bayesian networks with local structure. In Proceedings of the Twelfth Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-96), pages 252–262, San Francisco, CA. Morgan Kaufmann. Michel Galley and Kathleen McKeown. 2007. Lexicalized Markov grammars for sentence compression. 
Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 180–187. Pierre-Etienne Genest and Guy Lapalme. 2012. Fully abstractive approach to guided summarization. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, short papers. Association for Computational Linguistics. Dan Gillick. 2009. Sentence boundary detection and the problem with the us. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 241–244. Association for Computational Linguistics. Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3-Volume 3, pages 1152–1161. Association for Computational Linguistics. Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 362–370. Association for Computational Linguistics. 1252 Tommi S. Jaakkola and Michael I. Jordan. 1999. Variational probabilistic inference and the QMRDT Network. Journal of Artificial Intelligence Research, 10:291–322. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81. Blackford Middleton, Michael Shwe, David Heckerman, Max Henrion, Eric Horvitz, Harold Lehmann, and Gregory Cooper. 1991. Probabilistic diagnosis using a reformulation of the INTERNIST1/QMR knowledge base. I. The probabilistic model and inference algorithms. Methods of information in medicine, 30(4):241–255, October. Tom Minka. 2002. Judging Significance from Error Bars. CM U Tech R eport. Thahir P Mohamed, Estevam R Hruschka Jr, and Tom M Mitchell. 2011. Discovering relations between noun categories. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1447–1455. Association for Computational Linguistics. Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: A taxonomy of relational patterns with semantic types. EMNLP12. Courtney Napoles, Chris Callison-Burch, Juri Ganitkevitch, and Benjamin Van Durme. 2011. Paraphrastic sentence compression with a character-based metric: Tightening without deletion. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, pages 84–90. Association for Computational Linguistics. Joakim Nivre. 2006. Inductive Dependency Parsing, volume 34 of Text, Speech and Language Technology. Springer. Agnieszka Onisko, Marek J. Druzdzel, and Hanna Wasyluk. 2001. Learning Bayesian network parameters from small data sets: application of Noisy-OR gates. International Journal of Approximated Reasoning, 27(2):165–182. Karolina Owczarzak, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2012. An assessment of the accuracy of automatic evaluation in summarization. In Proceedings of the NAACL-HLT 2012 Workshop on Evaluation Metrics and System Comparison for Automatic Summarization, pages 1–9. Association for Computational Linguistics. Judea Pearl. 1988. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann. Tom´aˇs ˇSingliar and Miloˇs Hauskrecht. 2006. 
Noisy-or component analysis and its application to link analysis. J. Mach. Learn. Res., 7:2189–2213, December. Stephen Wan, Robert Dale, Mark Dras, and C´ecile Paris. 2007. Global Revision in Summarisation: Generating Novel Sentences with Prim’s Algorithm. In Proceedings of PACLING 2007 - 10th Conference of the Pacific Association for Computational Linguistics. Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 409–420. Association for Computational Linguistics. Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 25–26. Association for Computational Linguistics. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of The 23rd International Conference on Computational Linguistics, pages 1353–1361. 1253
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1254–1263, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Conditional Random Fields for Responsive Surface Realisation using Global Features Nina Dethlefs, Helen Hastie, Heriberto Cuay´ahuitl and Oliver Lemon Mathematical and Computer Sciences Heriot-Watt University, Edinburgh n.s.dethlefs | h.hastie | h.cuayahuitl | [email protected] Abstract Surface realisers in spoken dialogue systems need to be more responsive than conventional surface realisers. They need to be sensitive to the utterance context as well as robust to partial or changing generator inputs. We formulate surface realisation as a sequence labelling task and combine the use of conditional random fields (CRFs) with semantic trees. Due to their extended notion of context, CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers. This leads to more natural and less repetitive surface realisation. It also allows generation from partial and modified inputs and is therefore applicable to incremental surface realisation. Results from a human rating study confirm that users are sensitive to this extended notion of context and assign ratings that are significantly higher (up to 14%) than those for taking only local context into account. 1 Introduction Surface realisation typically aims to produce output that is grammatically well-formed, natural and cohesive. Cohesion can be characterised by lexical or syntactic cues such as repetitions, substitutions, ellipses, or connectives. In automatic language generation, such properties can sometimes be difficult to model, because they require rich contextawareness that keeps track of all (or much) of what was generated before, i.e. a growing generation history. In text generation, cohesion can span over the entire text. In interactive settings such as generation within a spoken dialogue system (SDS), a challenge is often to keep track of cohesion over several utterances. In addition, since interactions are dynamic, generator inputs from the dialogue manager can sometimes be partial or subject to subsequent modification. This has been addressed by work on incremental processing (Schlangen and Skantze, 2009). Since dialogue acts are passed on to the generation module as soon as possible, this can sometimes lead to incomplete generator inputs (because the user is still speaking), or inputs that are subject to later modification (because of an initial ASR mis-recognition). In this paper, we propose to formulate surface realisation as a sequence labelling task. We use conditional random fields (Lafferty et al., 2001; Sutton and McCallum, 2006), which are suitable for modelling rich contexts, in combination with semantic trees for rich linguistic information. This combination is able to keep track of dependencies between syntactic, semantic and lexical features across multiple utterances. Our model can be trained from minimally labelled data, which reduces development time and may (in the future) facilitate an application to new domains. The domain used in this paper is a pedestrian walking around a city looking for information and recommendations for local restaurants from an SDS. We describe here the module for surface realisation. 
Our main hypothesis is that the use of global context in a CRF with semantic trees can lead to surface realisations that are better phrased, more natural and less repetitive than taking only local features into account. Results from a human rating study confirm this hypothesis. In addition, we compare our system with alternative surface realisation methods from the literature, namely, a rank and boost approach and n-grams. Finally, we argue that our approach lends itself 1254 to surface realisation within incremental systems, because CRFs are able to model context across full as well as partial generator inputs which may undergo modifications during generation. As a demonstration, we apply our model to incremental surface realisation in a proof-of-concept study. 2 Related Work Our approach is most closely related to Lu et al. (2009) who also use CRFs to find the best surface realisation from a semantic tree. They conclude from an automatic evaluation that using CRF-based generation which takes long-range dependencies into account outperforms several baselines. However, Lu et al.’s generator does not take context beyond the current utterance into account and is thus restricted to local features. Furthermore, their model is not able to modify generation results on the fly due to new or updated inputs. In terms of surface realisation from graphical models (and within the context of SDSs), our approach is also related to work by Georgila et al. (2002) and Dethlefs and Cuay´ahuitl (2011b), who use HMMs, Dethlefs and Cuay´ahuitl (2011a) who use Bayes Nets, and Mairesse et al. (2010) who use Dynamic Bayes Nets within an Active Learning framework. The last approach is also concerned with generating restaurant recommendations within an SDS. Specifically, their system optimises its performance online, during the interaction, by asking users to provide it with new textual descriptions of concepts, for which it is unsure of the best realisation. In contrast to these related approaches, we use undirected graphical models which are useful when the natural directionality between the input variables is unknown. In terms of surface realisation for SDSs, Oh and Rudnicky (2000) present foundational work in using an n-gram-based system. They train a surface realiser based on a domain-dependent language model and use an overgeneration and ranking approach. Candidate utterances are ranked according to a penalty function which penalises too long or short utterances, repetitious utterances and utterances which either contain more or less information than required by the dialogue act. While their approach is fast to execute, it has the disadvantage of not being able to model long-range dependencies. They show that humans rank their output equivalently to template-based generation. Further, our approach is related to the SPaRKy sentence generator (Walker et al., 2007). SPaRKy was also developed for the domain of restaurant recommendations and was shown to be equivalent to or better than a carefully designed templatebased generator which had received high human ratings in the past (Stent et al., 2002). It generates sentences in two steps. First, it produces a randomised set of alternative realisations, which are then ranked according to a mapping from sentence plans to predicted human ratings using a boosting algorithm. As in our approach, SPaRKy distinguishes local and global features. 
Local features take only information of the current tree node into account, including its parents, siblings and children, while global features take information of the entire utterance into account. While SPaRKy is shown to reach high output quality in comparison to a template-based baseline, the authors acknowledge that generation with SPaRKy is rather slow when applied in a real-time SDS. This could present a problem in incremental settings, where generation speed is of particular importance. The SPaRKy system is also used by Rieser et al. (2011), who focus on information presentation strategies for restaurant recommendations, summaries or comparisons within an SDS. Their surface realiser is informed by the highest ranked SPaRKy outputs for a particular information presentation strategy and will constitute one of our baselines in the evaluation. More work on trainable realisation for SDSs generally includes Bulyko and Ostendorf (2002) who use finite state transducers, Nakatsu and White (2006) who use supervised learning, Varges (2006) who uses chart generation, and Konstas and Lapata (2012) who use weighted hypergraphs, among others.

3 Cohesion across Utterances

3.1 Tree-based Semantic Representations

The restaurant recommendations we generate can include any of the attributes shown in Table 1. It is then the task of the surface realiser to find the best realisation, including whether to present them in one or several sentences. This is often a sentence planning decision, but in our approach it is handled using CRF-based surface realisation.

Slot        Example
ADDRESS     The venue's address is . . .
AREA        It is located in . . .
FOOD        The restaurant serves . . . cuisine.
NAME        The restaurant's name is . . .
PHONE       The venue's phone number is . . .
POSTCODE    The postcode is . . .
QUALITY     This is a . . . venue.
PRICE       It is located in the . . . price range.
SIGNATURE   The venue specialises in . . .
VENUE       This venue is a . . .

Table 1: Semantic slots required for our domain along with example realisations. Attributes can be combined in all possible ways during generation.

The semantic forms underlying surface realisation can be produced in many ways. In our case, they are produced by a reinforcement learning agent which orders semantic attributes in the tree according to their confidence in the dialogue. This is because SDSs can often have uncertainties with regard to the user's actual desired attribute values due to speech recognition inaccuracies. We therefore model all semantic slots as probability distributions, such as inform(food=Indian, 0.6) or inform(food=Italian, 0.4), and apply reinforcement learning to find the optimal sequence for presentation. Please see Dethlefs et al. (2012b) for details. Here, we simply assume that a semantic form has been produced by a previous processing module.

Figure 1: Architecture of our SDS with a focus on the NLG components. While the user is speaking, the dialogue manager sends dialogue acts to the NLG module, which uses reinforcement learning to order semantic attributes and produce a semantic tree (see Dethlefs et al. (2012b)). This paper focuses on surface realisation from these trees using a CRF as shown in the surface realisation module.

As shown in the architecture diagram in Figure 1, a CRF surface realiser takes a semantic tree as input. We represent these as context-free trees which can be defined formally as 4-tuples {S, T, N, H}, where S is a start symbol, typically the root node of the tree; T = {t_0, t_1, t_2, . . . , t_{|T|}} is a set of terminal symbols, corresponding to single phrases; N = {n_0, n_1, n_2, . . . , n_{|N|}} is a set of non-terminal symbols corresponding to semantic categories, and H = {h_0, h_1, h_2, . . . , h_{|H|}} is a set of production rules of the form n → α, where n ∈ N, α ∈ T ∪ N. The production rules represent alternatives at each branching node where the CRF is consulted for the best available expansion from the subset of possible ones. All nodes in the tree are annotated with a semantic concept (obtained from the semantic form) as well as their parse category.

3.2 Conditional Random Fields for Phrase-Based Surface Realisation

The main idea of our approach is to treat surface realisation as a sequence labelling task in which a sequence of semantic inputs needs to be labelled with appropriate surface realisations. The task is therefore to find a mapping between (observed) lexical, syntactic and semantic features and a (hidden) best surface realisation.

Figure 2: (a) Graphical representation of a linear-chain Conditional Random Field (CRF), where empty nodes correspond to the labelled sequence, shaded nodes to linguistic observations, and dark squares to feature functions between states and observations; (b) Example semantic trees that are updated at each time step in order to provide linguistic features to the CRF (only one possible surface realisation is shown and parse categories are omitted for brevity); (c) Finite state machine of phrases (labels) for this example.

We use the linear-chain Conditional Random Field (CRF) model for statistical phrase-based surface realisation, see Figure 2 (a). This probabilistic model defines the posterior probability of labels (surface realisation phrases) y = {y_1, . . . , y_{|y|}} given features x = {x_1, . . . , x_{|x|}} (informed by a semantic tree, see Figure 2 (b)), as

P(y|x) = \frac{1}{Z(x)} \prod_{t=1}^{T} \exp\Big( \sum_{k=1}^{K} \theta_k \Phi_k(y_t, y_{t-1}, x_t) \Big),

where Z(x) is a normalisation factor over all possible realisations (i.e. labellings) of x such that the sum of all terms is one. The parameters \theta_k are weights corresponding to feature functions \Phi_k(.), which are real values describing the label state y at time t based on the previous label state y_{t-1} and features x_t. For example: from Figure 2 (c), \Phi_k might have the value \Phi_k = 1.0 for the transition from "The Beluga" to "is a great Italian", and 0.0 elsewhere. The parameters \theta_k are set to maximise the conditional likelihood of phrase sequences in the training data set. They are estimated using the gradient ascent algorithm. After training, labels can be predicted for new sequences of observations. The most likely phrase sequence is expressed as

y* = \arg\max_y P(y|x),

which is computed using the Viterbi algorithm. We use the Mallet package1 (McCallum, 2002) for parameter learning and inference.

3.3 Feature Selection and Training

The following features define the generation context used during training of the CRF. The generation context includes everything that has been generated for the current utterance so far. All features can be obtained from a semantic input tree.

• Lexical items of parents and siblings,
• Semantic types in expansion,
• Semantic types of parents and siblings,
• Parse category of expansion,
• Parse categories of parents and siblings.

We use the StanfordParser2 (Marneffe et al., 2006) to obtain the parse category for each tree node.

1 http://mallet.cs.umass.edu/
2 http://nlp.stanford.edu/software/lex-parser.shtml
• Lexical items of parents and siblings, • Semantic types in expansion, • Semantic types of parents and siblings, • Parse category of expansion, • Parse categories of parents and siblings. We use the StanfordParser2 (Marneffe et al., 2006) to obtain the parse category for each tree node. 1http://mallet.cs.umass.edu/ 2http://nlp.stanford.edu/software/ lex-parser.shtml The semantics for each node are derived from the input dialogue acts (these are listed in Table 1) and are associated with nodes. The lexical items are present in the generation context and are mapped to semantic tree nodes. As an example, for generating an utterance (label sequence) such as The Beluga is a great restaurant. It is located in the city centre., each generation step needs to take the features of the entire generation history into account. This includes all individual lexical items generated, the semantic types used and the parse categories for each tree node involved. For the first constituent, The Beluga, this corresponds to the features {ˆ BEGIN NAME} indicating the beginning of a sentence (where empty features are omitted), the beginning of a new generation context and the next semantic slot required. For the second constituent, is a great restaurant, the features are {THE BELUGA NAME NP VENUE}, i.e. including the generation history (with lexical items and parse category added for the first constituent) and the semantics of the next required slot, VENUE. In this way, a sequence of surface form constituents is generated corresponding to latent states in the CRF. Since global utterance features capture the full generation context (i.e. beyond the current utterance), we are also able to model phenomena such as co-references and pronouns. This is useful for longer restaurant recommendations which may span over more than one utterance. If the generation history already contains a semantic attribute, e.g. the restaurant name, the CRF may afterwards choose a pronoun, e.g. it, which has a higher likelihood than using the proper name again. Similarly, the CRF may decide to realise a new attribute as constituents of different order, such as a sentence or PP, depending on the length, number and parse categories of previously generated output. In this way, our approach implicitly treats sentence planning decisions such as the distribution of content over a set of messages in the same way as (or as part of) surface realisation. A further capability of our surface realiser is that it can generate complete phrases from full as well as partial dialogue acts. This is useful in interactive contexts, where we need as much robustness as possible. A demonstration of this is given in Section 5 in an application to incremental surface realisation. To train the CRF, we used a data set of 552 restaurant recommendations from the website The 1257 List.3 The data contains recommendations such as Located in the city centre, Beluga is a stylish yet laid-back restaurant with a smart menu of modern European cuisine. 3.4 Grammar Induction The grammar g of surface realisation candidates is obtained through an automatic grammar induction algorithm which can be run on unlabelled data and requires only minimal human intervention. This grammar defines the surface realisation space for the CRFs. We provide the human corpus of restaurant recommendations from Section 3.3 as input to grammar induction. The algorithm is shown in Algorithm 1. 
It first identifies all semantic attributes of interest in an utterance, in our case those specified in Table 1, and replaces them by a variable. These attributes include food types, such as Mexican, Chinese, particular parts of town, prices, etc. About 45% of them can be identified based on heuristics. The remainder needs to be hand-annotated at the moment, which includes mainly attributes like restaurant names or quality attributes, such as delicate, exquisite, etc. Subsequently, all utterances are parsed using the Stanford parser to obtain constituents and are integrated into the grammar under construction. The non-terminal symbols are named after the automatically annotated semantic attributes contained in their expansion, e.g. NAME QUALITY →The $name$ is of $quality$ quality. In this way, each non-terminal symbol has a semantic representation and an associated parse category. In total, our induced grammar contains more than 800 rules. 4 Evaluation To evaluate our approach, we focus on a subjective human rating study which aims to determine whether CRF-based surface realisation that takes the full generation context into account, called CRF (global), is perceived better by human judges than one that uses a CRF but just takes local context into account, called CRF (local). While CRF (global) uses features from the entire generation history, CRF (local) uses only features from the current tree branch. We assume that cohesion can be identified by untrained judges as natural, well-phrased and non-repetitive surface forms. To examine differences in methodology between 3http://www.list.co.uk Algorithm 1 Grammar Induction. 1: function FINDGRAMMAR(utterances u, semantic attributes a) return grammar 2: for each utterance u do 3: if u contains a semantic attribute from a, such as venue, cuisine, etc. then 4: Find and replace the attribute by its semantic variable, e.g. $venue$. 5: end if 6: Parse the sentence and induce a set of rules α → β, where α is a semantic variable and β is its parse. 7: Traverse the parse tree in a top-down, depth-first search and 8: if expansion β exists then 9: continue 10: else if non-terminal α exists then 11: add new expansion β to α. 12: else write new rule α →β. 13: end if 14: Write grammar. 15: end for 16: end function CRFs and other state-of-the-art methods, we also compare our system to two other baselines: • CLASSiC corresponds to the system reported in Rieser et al. (2011),4 which generates restaurant recommendations based on the SPaRKy system (Walker et al., 2007), and has received high ratings in the past. SPaRKy uses global utterance features. • n-grams represents a simple 5-gram baseline that is similar to Oh and Rudnicky (2000)’s system. We will sample from the most likely slot realisations that do not contain a repetition and include exactly the required slot values. Local context only is taken into account. 4.1 Human Rating Study We carried out a user rating study on the CrowdFlower crowd sourcing platform.5 Each participant was shown part of a real human-system dialogue that emerged as part of the CLASSiC project evaluation (Rieser et al., 2011). All dialogues and data are freely available from http://www. classic-project.org. Each dialogue contained two variations for one of the utterances. These variations were generated from two out of the four systems described above. The order that these were presented to the participant was counterbalanced. Table 2 gives an example of a dialogue segment presented to the participants. 4In Rieser et al. 
(2011), this system is referred to as the TIP system, which generates summaries, comparisons or recommendations for restaurants. For the present study, we com1258 SYS Thank you for calling the Cambridge Information system. Your call will be recorded for research purposes. You may ask for information about a place to eat, such as a restaurant, a pub, or a cafe. How may I help you? USR I want to find an American restaurant which is in the very expensive area. SYS A The restaurant Gourmet Burger is an outstanding, expensive restaurant located in the central area. SYS B Gourmet Burger is a smart and welcoming restaurant. Gourmet Burger provides an expensive dining experience with great food and friendly service. If you’re looking for a central meal at an expensive price. USR What is the address and phone number? SYS Gourmet Burger is on Regent Street and its phone number is 01223 312598. USR Thank you. Good bye. Table 2: Example dialogue for participants to compare alternative outputs in italics, USR=user, SYS A=CRF (global), SYS B=CRF(local). System Natural Phrasing Repetit. CRF global 3.65 3.64 3.65 CRF local 3.10∗ 3.19∗ 3.13∗ CLASSiC 3.53∗ 3.59 3.48∗ n-grams 3.01∗ 3.09∗ 3.32∗ Table 3: Subjective user ratings. Significance with CRF (global) at p<0.05 is indicated as ∗. 44 participants gave a total of 1,830 ratings of utterances produced across the four systems. Fluent speakers of English only were requested and the participants were from the United States. They were asked to rate each utterance on a 5 point Likert scale in response to the following questions (where 5 corresponds to totally agree and 1 corresponds to totally disagree): • The utterance was natural, i.e. it could have been produced by a human. (Natural) • The utterance was phrased well. (Phrasing) • The utterance was repetitive. (Repetitive) 4.2 Results We can see from Table 3 that across all the categories, the CRF (global) gets the highest overall ratings. This difference is significant for all categories compared with CRF (local) and n-grams (using a 1-sided Mann Whitney U-test, p < 0.001). pare only with the subset of recommendations. 5http://www.crowdflower.com Possibly this is because the local context taken into account by both systems was not enough to ensure cohesion across surface phrases. It is not possible, e.g., to cover co-references within a local context only or discourse markers that refer beyond the current utterance. This can lead to short and repetitive phrases, such as Make your way to Gourmet Burger. The food quality is outstanding. The prices are expensive. generated by the n-gram baseline. The CLASSiC baseline, based on SPaRKy, was the most competitive system in our comparison. None-the-less CRF (global) is rated higher across categories and significantly so for Natural (p < 0.05) and Repetitive (p < 0.005). For Phrasing, there is a trend but not a significant difference (p < 0.16). All comparisons are based on a 1-sided Mann Whitney U-test. A qualitative comparison between the CRF (global) and CLASSiC outputs showed the following. CLASSiC utterances tend to be longer and contain more sentences than CRF (global) utterances. While CRF (global) often decides to aggregate attributes into one sentence, such as the Beluga is an outstanding restaurant in the city centre, CLASSiC tends to rely more on individual messages, such as The Beluga is an outstanding restaurant. It is located in the city centre. 
A possible reason is that while CRF (global) is able to take features beyond an utterance into account, CLASSiC/SPaRKy is restricted to global features of the current utterance. We can further compare our results with Rieser et al. (2011) and Mairesse et al. (2010) who also generate restaurant recommendations and asked similar questions to participants as we did. Rieser et al. (2011)’s system received an average rating of 3.586 in terms of Phrasing which compares to our 3.64. This difference is not significant, and in line with the user ratings we observed for the CLASSiC system above (3.59). Mairesse et al. (2010) achieved an average score of 4.05 in terms of Natural in comparison to our 3.65. This difference is significant at p<0.05. Possibly their better performance is due to the data set being more “in domain” than ours. They collected data from humans that was written specifically for the task that the system was tested on. In contrast, our system was trained on freely available data that was written by professional restaurant reviewers. Unfortunately, we cannot compare across other categories, 6This was rescaled from a 1-6 scale. 1259 USR1 I’m looking for a nice restaurant in the centre. SYS1 inform(area=centre [0.2], food=Thai [0.3]) inform(name=Bangkok [0.3]) So you’re looking for a Thai . . . USR2 [barges in] No, I’m looking for a restaurant with good quality food. SYS2 inform(quality=good [0.6], name=Beluga [0.6]) Oh sorry, so a nice restaurant located . . . USR3 [barges in] . . . in the city centre. SYS3 inform(area=centre [0.8]) Table 4: Example dialogue where the dialogue manager needs to send incremental updates to the NLG. Incremental surface realisation from semantic trees for this dialogue is shown in Figure 3. because the authors tested only for Phrasing and Natural, respectively. 5 Incremental Surface Realisation Recent years have seen increased interest in incremental dialogue processing (Skantze and Schlangen, 2009; Schlangen and Skantze, 2009). The main characteristic of incremental architectures is that instead of waiting for the end of a user turn, they begin to process the input stream as soon as possible, updating their processing hypotheses as more information becomes available. From a dialogue perspective, they can be said to work on partial rather than full dialogue acts. With respect to surface realisation, incremental NLG systems have predominantly relied on pre-defined templates (Purver and Otsuka, 2003; Skantze and Hjalmarsson, 2010; Dethlefs et al., 2012a), which limits the flexibility and quality of output generation. Buschmeier et al. (2012) have presented a system which systematically takes the user’s acoustic understanding problems into account by pausing, repeating or re-phrasing if necessary. Their approach is based on SPUD (Stone et al., 2003), a constraint satisfaction-based NLG architecture and marks important progress towards more flexible incremental surface realisation. However, given the human labour involved in constraint specification, cohesion is often limited to a local context. Especially for long utterances or such that are separated by user turns, this may lead to surface form increments that are not well connected and lack cohesion. 5.1 Application to Incremental SR This section will discuss a proof-of-concept application of our approach to incremental surface realisation. 
Table 4 shows an example dialogue between a user and system that contains a number of incremental phenomena that require hypothesis updates, system corrections and user bargeins. Incremental surface realisation for this dialogue is shown in Figure 3, where processing steps are indicated as bold-face numbers and are triggered by partial dialogue acts that are sent from the dialogue manager, such as inform(area=centre [0.2]). The numbers in square brackets indicate the system’s confidence in the attribute-value pair. Once a dialogue act is observed by the NLG system, a reinforcement learning agent determines the order of attributes and produces a semantic tree, as described in Section 3.1. Since the semantic forms are constructed incrementally, new tree nodes can be attached to and deleted from an existing tree, depending on what kind of update is required. In the dialogue in Table 4, the user first asks for a nice restaurant in the centre. The dialogue manager constructs a first attribute-value slot, inform(area=centre [0.2], . . . ), and passes it on to NLG.7 In Figure 3, we can observe the corresponding NLG action, a first tree is created with just a root node and a node representing the area slot (step 1). In a second step, the semantically annotated node gets expanded into a surface form that is chosen from a set of candidates (shown in curly brackets). The CRF is responsible for this last step. Since there is no preceding utterance, the best surface form is chosen based on the semantics alone. Active tree nodes, i.e. those currently under generation, are indicated as asterisks in Figure 3. Currently inactive nodes are shown as circles. Step 3 then further expands the current tree adding a node for the food type and the name of a restaurant that the dialogue manager had passed. We see here that attributes can either be primitive or complex. Primitive attributes contain a single semantic type, such as area, whereas complex attributes contain multiple types, such as food, name and need to be decomposed in a later processing step (see steps 4 and 6). Step 5 again uses the CRF 7Note here that the information passed on to the NLG is distinct from the dialogue manager’s own actions. In the example, the NLG is asked to generate a recommendation, but the dialogue manager actually decides to clarify the user’s preferences due to low confidence. This scenario is an example of generator inputs that may get revised afterwards. 1260 root (1) inform (area=centre) (2) Right in the city centre, {located in $area$, if you're looking to eat in $area$, in $area$, ...} inform(area= centre) (3) inform(food=Thai name=Bangkok) Right in the city centre, root (6) inform (food=Thai) (4) inform(name= Bangkok) (5) Bangkok {the $name$, it is called $name$, ...} root inform(area= centre) Right in the city centre, inform(food=Thai, name=Bangkok) root inform(area= centre) Right in the city centre, (7) inform(quality=very good, name=Beluga) inform(name= Bangkok) inform (food=Thai) Bangkok root inform(area= centre) inform(quality=nice, name=Beluga) Right in the city centre, (8) inform(name= Beluga) (10) inform(quality= very good) (9) the Beluga {$name$, the venue called $name$, ...} (11) is of very good quality. {is a $quality$ venue, if you want $quality$ food, $quality$, a $quality$ place ...} * * * * * * * * * * * * * Figure 3: Example of incremental surface realisation, where each generation step is indicated by a number. Active generation nodes are shown as asterisks and deletions are shown as crossed out. 
Lexical and semantic features are associated with their respective nodes. Syntactic information in the form of parse categories are also taken into account for surface realisation, but have been omitted in this figure. to obtain the next surface realisation that connects with the previous one (so that a sequence of realisation “labels” appears: Right in the city centre and Bangkok). It takes the full generation context into account to ensure a globally optimal choice. This is important, because the local context would otherwise be restricted to a partial dialogue act, which can be much smaller than a full dialogue act and thus lead to short, repetitive sentences. The dialogue continues as the system implicitly confirms the user’s preferred restaurant (SYS1). At this point, we encounter a user barge-in correcting the desired choice. As a consequence, the dialogue manager needs to update its initial hypotheses and communicate this to NLG. Here, the last three tree nodes need to be deleted from the tree because the information is no longer valid. This update and the deletion is shown in step 7. Afterwards, the dialogue continues and NLG involves mainly expanding the current tree into a full sequence of surface realisations for partial dialogue acts which come together into a full utterance. This example illustrates three incremental processing steps: expansions, updates and deletions. Expansions are the most frequent operation. They add new partial dialogue acts to the semantic tree. They also consult the CRF for the best surface realisation. Since CRFs are not restricted by the Markov condition, they are less constrained by local context than other models and can take nonlocal dependencies into account. For our application, the maximal context is 9 semantic attributes (for a surface form that uses all possible 10 attributes). While their extended context awareness can often make CRFs slow to train, they are fast at execution and therefore very applicable to the incremental scenario. For applications involving longer-spanning alternatives, such as texts or paragraphs, the context of the CRF would likely have to be constrained. Updates are triggered by the hypothesis updates of the dialogue manager. Whenever a new attribute comes in, it is checked against the generator’s existing knowledge. If it is inconsistent with previous knowledge, an update is triggered and often followed by a deletion. Whenever generated output needs to be modified, old expansions and surface forms are deleted first, before new ones can be expanded in their place. 5.2 Updates and Processing Speed Results Since fast responses are crucial in incremental systems, we measured the average time our system took for a surface realisation. The time is 100ms on a MacBook Intel Core 2.6 Duo with 8GB in 1261 RAM. This is slightly better than other incremental systems (Skantze and Schlangen, 2009) and much faster than state-of-the-art non-incremental systems such as SPaRKy (Walker et al., 2007). In addition, we measured the number of necessary generation updates in comparison to a nonincremental setting. Since updates take effect directly on partial dialogue acts, rather than the full generated utterance, we require around 50% less updates as if generating from scratch for every changed input hypothesis. A qualitative analysis of the generated outputs showed that the quality is comparable to the non-incremental case. 
6 Conclusion and Future Directions We have presented a novel technique for surface realisation that treats generation as a sequence labelling task by combining a CRF with tree-based semantic representations. An essential property of interactive surface realisers is to keep track of the utterance context including dependencies between linguistic features to generate cohesive utterances. We have argued that CRFs are well suited for this task because they are not restricted by independence assumptions. In a human rating study, we confirmed that judges rated our output as better phrased, more natural and less repetitive than systems that just take local features into account. This also holds for a comparison with stateof-the-art rank and boost or n-gram approaches. Keeping track of the global context is also important for incremental systems since generator inputs can be incomplete or subject to modification. In a proof-of-concept study, we have argued that our approach is applicable to incremental surface realisation. This was supported by preliminary results on the speed, number of updates and quality during generation. As future work, we plan to test our model in a task-based setting using an end-toend SDS in an incremental and non-incremental setting. This study will contain additional evaluation categories, such as the understandability or informativeness of system utterances. In addition, we may compare different sequence labelling algorithms for surface realisation (Nguyen and Guo, 2007) or segmented CRFs (Sarawagi and Cohen, 2005) and apply our method to more complex surface realisation domains such as text generation or summarisation. Finally, we would like to explore methods for unsupervised data labelling so as to facilitate portability across domains further. Acknowledgements The research leading to this work was funded by the EC FP7 programme FP7/2011-14 under grant agreement no. 287615 (PARLANCE). References Ivan Bulyko and Mari Ostendorf. 2002. Efficient integrated response generation from multiple targets using weighted finite state transducers. Computer Speech and Language, 16:533–550. Hendrik Buschmeier, Timo Baumann, Benjamin Dosch, Stefan Kopp, and David Schlangen. 2012. Incremental Language Generation and Incremental Speech Synthesis. In Proceedings of the 13th Annual SigDial Meeting on Discourse and Dialogue (SIGdial), Seoul, South Korea. Nina Dethlefs and Heriberto Cuay´ahuitl. 2011a. Combining Hierarchical Reinforcement Learning and Bayesian Networks for Natural Language Generation in Situated Dialogue. In Proceedings of the 13th European Workshop on Natural Language Generation (ENLG), Nancy, France. Nina Dethlefs and Heriberto Cuay´ahuitl. 2011b. Hierarchical Reinforcement Learning and Hidden Markov Models for Task-Oriented Natural Language Generation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACLHLT), Portland, Oregon, USA. Nina Dethlefs, Helen Hastie, Verena Rieser, and Oliver Lemon. 2012a. Optimising Incremental Dialogue Decisions Using Information Density for Interactive Systems. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-CoNLL), Jeju, South Korea. Nina Dethlefs, Helen Hastie, Verena Rieser, and Oliver Lemon. 2012b. Optimising Incremental Generation for Spoken Dialogue Systems: Reducing the Need for Fillers. In Proceedings of the International Conference on Natural Language Generation (INLG), Chicago, Illinois, USA. 
Kallirroi Georgila, Nikos Fakotakis, and George Kokkinakis. 2002. Stochastic Language Modelling for Recognition and Generation in Dialogue Systems. TAL (Traitement automatique des langues) Journal, 43(3):129–154. Ioannis Konstas and Mirella Lapata. 2012. Conceptto-text Generation via Discriminative Reranking. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 369– 378, Jeju Island, Korea. John D. Lafferty, Andrew McCallum, and Fernando C.N. Pereira. 2001. Conditional Random 1262 Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML), pages 282–289. Wei Lu, Hwee Tou Ng, and Wee Sun Lee. 2009. Natural Language Generation with Tree Conditional Random Fields. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), Singapore. Franc¸ois Mairesse, Filip Jurˇc´ıˇcek, Simon Keizer, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-Based Statistical Language Generation Using Graphical Models and Active Learning. In Proceedings of the 48th Annual Meeting of the Association of Computational Linguistics (ACL), Uppsala, Sweden. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC), Genoa, Italy. Andrew McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. Crystal Nakatsu and Michael White. 2006. Learning to Say It Well: Reranking Realizations by Predicted Synthesis Quality. In In Proceedings of the Annual Meeting of the Association for Computational Linguistics (COLING-ACL) 2006, pages 1113–1120, Sydney, Australia. Nam Nguyen and Yunsong Guo. 2007. Comparisons of Sequence Labeling Algorithms and Extensions. In Proceedings of the International Conference on Machine Learning (ICML), Corvallis, OR, USA. Alice Oh and Alexander Rudnicky. 2000. Stochastic Language Generation for Spoken Dialogue Systems. In Proceedings of the ANLP/NAACL Workshop on Conversational Systems, pages 27–32, Seattle, Washington, USA. Matthew Purver and Masayuki Otsuka. 2003. Incremental Generation by Incremental Parsing. In In Proceedings of the 6th UK Special-Interesting Group for Computational Linguistics (CLUK) Colloquium. Verena Rieser, Simon Keizer, Xingkun Liu, and Oliver Lemon. 2011. Adaptive Information Presentation for Spoken Dialogue Systems: Evaluation with Human Subjects. In Proceedings of the 13th European Workshop on Natural Language Generation (ENLG), Nancy, France. Sunita Sarawagi and William Cohen. 2005. SemiMarkov Conditional Random Fields for Information Extraction. Advances in Neural Information Processing. David Schlangen and Gabriel Skantze. 2009. A General, Abstract Model of Incremental Dialogue Processing. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, Athens, Greece. Gabriel Skantze and Anna Hjalmarsson. 2010. Towards Incremental Speech Generation in Dialogue Systems. In Proceedings of the 11th Annual SigDial Meeting on Discourse and Dialogue, Tokyo, Japan. Gabriel Skantze and David Schlangen. 2009. Incremental Dialogue Processing in a Micro-Domain. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, Athens, Greece. 
Amanda Stent, Marilyn Walker, Steve Whittaker, and Preetam Maloor. 2002. User-tailored Generation for Spoken Dialogue: An Experiment. In Proceedings of the International Conference on Spoken Language Processing. Matthew Stone, Christine Doran, Bonnie Webber, Tonia Bleam, and Martha Palmer. 2003. Microplanning with Communicative Intentions: The SPUD System. Computational Intelligence, 19:311–381. Charles Sutton and Andrew McCallum. 2006. Introduction to Conditional Random Fields for Relational Learning. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press. Sebastian Varges. 2006. Overgeneration and Ranking for Spoken Dialogue Systems. In Proceedings of the Fourth International Natural Language Generation Conference (INLG), Sydney, Australia. Marilyn Walker, Amanda Stent, François Mairesse, and Rashmi Prasad. 2007. Individual and Domain Adaptation in Sentence Planning for Dialogue. Journal of Artificial Intelligence Research, 30(1):413–456.
2013
123
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1264–1274, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Two-Neighbor Orientation Model with Cross-Boundary Global Contexts Hendra Setiawan, Bowen Zhou, Bing Xiang and Libin Shen IBM T.J.Watson Research Center 1101 Kitchawan Road Yorktown Heights, NY 10598, USA {hendras,zhou,bxiang,lshen}@us.ibm.com Abstract Long distance reordering remains one of the greatest challenges in statistical machine translation research as the key contextual information may well be beyond the confine of translation units. In this paper, we propose Two-Neighbor Orientation (TNO) model that jointly models the orientation decisions between anchors and two neighboring multi-unit chunks which may cross phrase or rule boundaries. We explicitly model the longest span of such chunks, referred to as Maximal Orientation Span, to serve as a global parameter that constrains underlying local decisions. We integrate our proposed model into a state-of-the-art string-to-dependency translation system and demonstrate the efficacy of our proposal in a large-scale Chinese-to-English translation task. On NIST MT08 set, our most advanced model brings around +2.0 BLEU and -1.0 TER improvement. 1 Introduction Long distance reordering remains one of the greatest challenges in Statistical Machine Translation (SMT) research. The challenge stems from the fact that an accurate reordering hinges upon the model’s ability to make many local and global reordering decisions accurately. Often, such reordering decisions require contexts that span across multiple translation units.1 Unfortunately, previous approaches fall short in capturing such cross-unit contextual information that could be 1We define translation units as phrases in phrase-based SMT, and as translation rules in syntax-based SMT. critical in reordering. Specifically, the popular distortion or lexicalized reordering models in phrasebased SMT focus only on making good local prediction (i.e. predicting the orientation of immediate neighboring translation units), while translation rules in syntax-based SMT come with a strong context-free assumption, which model only the reordering within the confine of the rules. In this paper, we argue that reordering modeling would greatly benefit from richer cross-boundary contextual information We introduce a reordering model that incorporates such contextual information, named the TwoNeighbor Orientation (TNO) model. We first identify anchors as regions in the source sentences around which ambiguous reordering patterns frequently occur and chunks as regions that are consistent with word alignment which may span multiple translation units at decoding time. Most notably, anchors and chunks in our model may not necessarily respect the boundaries of translation units. Then, we jointly model the orientations of chunks that immediately precede and follow the anchors (hence, the name “two-neighbor”) along with the maximal span of these chunks, to which we refer as Maximal Orientation Span (MOS). As we will elaborate further in next sections, our models provide a stronger mechanism to make more accurate global reordering decisions for the following reasons. First of all, we consider the orientation decisions on both sides of the anchors simultaneously, in contrast to existing works that only consider one-sided decisions. 
In this way, we hope to upgrade the unigram formulation of existing reordering models to a higher order formulation. Second of all, we capture the reordering of chunks that may cross translation units and may be composed of multiple units, in contrast to ex1264 isting works that focus on the reordering between individual translation units. In effect, MOS acts as a global reordering parameter that guides or constrains the underlying local reordering decisions. To show the effectiveness of our model, we integrate our TNO model into a state-of-theart syntax-based SMT system, which uses synchronous context-free grammar (SCFG) rules to jointly model reordering and lexical translation. The introduction of nonterminals in the SCFG rules provides some degree of generalization. However as mentioned earlier, the context-free assumption ingrained in the syntax-based formalism often limits the model’s ability to influence global reordering decision that involves cross-boundary contexts. In integrating TNO, we hope to strengthen syntax-based system’s ability to make more accurate global reordering decisions. Our other contribution in this paper is a practical method for integrating the TNO model into syntax-based translations. The integration is nontrivial since the decoding of syntax-based SMT proceeds in a bottom-up fashion, while our model is more natural for top-down parsing, thus the model’s full context sometimes is often available only at the latest stage of decoding. We implement an efficient shift-reduce algorithm that facilitates the accumulation of partial context in a bottom-up fashion, allowing our model to influence the translation process even in the absence of full context. We show the efficacy of our proposal in a largescale Chinese-to-English translation task where the introduction of our TNO model provides a significant gain over a state-of-the-art string-todependency SMT system (Shen et al., 2008) that we enhance with additional state-of-the-art features. Even though the experimental results carried out in this paper employ SCFG-based SMT systems, we would like to point out that our models is applicable to other systems including phrasebased SMT systems. The rest of the paper is organized as follows. In Section 2, we introduce the formulation of our TNO model. In Section 3, we introduce and motivate the concept of Maximal Orientation Span. In Section 4, we introduce four variants of the TNO model with different model complexities. In Section 5, we describe the training procedure to estimate the parameters of our models. In Section 6, we describe our shift-reduce algorithm which integrates our proposed TNO model into syntax-based SMT. In Section 7, we describe our experiments and present our results. We wrap up with related work in Section 8 and conclusion in Section 9. 2 Two-Neighbor Orientation Model Given an aligned sentence pair Θ = (F, E, ∼), let ∆(Θ) be all possible chunks that can be extracted from Θ according to: 2 {(fj2 j1/ei2 i1):∀j1≤j≤j2, ∃i : (j, i)∈∼, ii≤i≤i2 ∧ ∀i1≤i≤i2, ∃j : (j, i)∈∼, ji≤j≤j2} Our Two-Neighbor Orientation model (TNO) designates A ⊂∆(Θ) as anchors and jointly models the orientation of chunks that appear immediately to the left and to the right of the anchors as well as the identities of these chunks. We define anchors as chunks, around which ambiguous reordering patterns frequently occur. Anchors can be learnt automatically from the training data or identified from the linguistic analysis of the source sentence. 
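As a concrete reading of the chunk definition above, the following sketch enumerates alignment-consistent source spans together with their projected target spans. It assumes that every word inside a chunk must be aligned, and only to words inside the chunk; the brute-force enumeration and the 0-based (j, i) link representation are simplifications for illustration rather than the authors' implementation.

```python
# A sketch of enumerating the chunk set Delta(Theta) for an aligned sentence
# pair, under one plausible reading of the definition above: every word inside
# a chunk is aligned, and only to words inside the chunk.

from collections import defaultdict

def extract_chunks(src_len, links):
    """links: set of (source index j, target index i) pairs."""
    s2t, t2s = defaultdict(set), defaultdict(set)
    for j, i in links:
        s2t[j].add(i)
        t2s[i].add(j)

    chunks = []
    for j1 in range(src_len):
        for j2 in range(j1, src_len):
            span = set(range(j1, j2 + 1))
            if any(not s2t[j] for j in span):  # an unaligned source word breaks the chunk
                continue
            targets = {i for j in span for i in s2t[j]}
            i1, i2 = min(targets), max(targets)
            # every target word in [i1, i2] must be aligned, and only into [j1, j2]
            if all(t2s[i] and t2s[i] <= span for i in range(i1, i2 + 1)):
                chunks.append(((j1, j2), (i1, i2)))
    return chunks

# Toy alignment with one crossing link pair (0-0, 1-2, 2-1):
print(extract_chunks(3, {(0, 0), (1, 2), (2, 1)}))
```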
In our experiments, we use a simple heuristics based on part-of-speech tags which will be described in Section 7. More concretely, given A ⊂∆(Θ), let a = (fj2 j1/ei2 i1) ∈A be a particular anchor. Then, let CL(a) ⊂∆(Θ) be a’s left neighbors and let CR(a) ⊂∆(Θ) be a’s right neighbors, iff: ∀CL = (fj4 j3/ei4 i3) ∈CL(a) : j4 + 1 = j1 (1) ∀CR = (fj6 j5/ei6 i5) ∈CR(a) : j2 + 1 = j5 (2) Given CL(a) and CR(a), let CL = (fj4 j3/ei4 i3) and CR = (fj6 j5/ei6 i5) be a particular pair of left and right neighbors of a = (fj2 j1/ei2 i1). Then, the orientation of CL and CR are OL(CL, a) and OR(CR, a) respectively and each may take one of the following four orientation values (similar to (Nagata et al., 2006)): • Monotone Adjacent (MA), if (i4 + 1) = i1 for OL and if (i2 + 1) = i5 for OR • Reverse Adjacent (RA), if (i2 + 1) = i3 for OL and if (i6 + 1) = i1 for OR • Monotone Gap (MG), if (i4 + 1) < i1 for OL and if (i2 + 1) < i5 for OR 2We represent a chunk as a source and target phrase pair (f j2 j1/ei2 i1) where the subscript and the superscript indicate the starting and the ending indices as such f j2 j1 denotes a source phrase that spans from j1 to j2. 1265 Figure 1: An aligned Chinese-English sentence pair. Circles represent alignment points. Black circle represents the anchor; boxes represent the anchor’s neighbors. • Reverse Gap (RG), if (i2 + 1) < i3 for OL and if (i6 + 1) < i1 for OR. (1) The first clause (monotone, reverse) indicates whether the target order of the chunks follows the source order; the second (adjacent, gap) indicates whether the chunks are adjacent or separated by an intervening phrase when projected. To be more concrete, let us consider an aligned sentence pair in Fig. 1, which is adapted from (Chiang, 2005). Suppose there is only one anchor, i.e. a = (f7 7 /e7 7) which corresponds to the word de(that). By applying Eqs. 1 and 2, we can infer that a has three left neighbors and four right neighbors, i.e. CL(a) = (f6 6 /e9 9), (f6 5 /e9 8), (f6 3 /e11 8 ) and CR(a) = (f8 8 /e5 5), (f9 8 /e6 5), (f10 8 /e6 4), (f11 8 /e6 3) respectively. Then, by applying Eq. 1, we can compute the orientation values of each of these neighbors, which are OL(CL(a), a) = RG, RA, RA and OR(CR(a), a) = RG, RA, RA, RA. As shown, most of the neighbors have Reverse Adjacent (RA) orientation except for the smallest left and right neighbors (i.e. (f6 6 /e9 9) and (f8 8 /e5 5)) which have Reverse Gap (RG) orientation. Given the anchors together with its neighboring chunks and their orientations, the Two-Neighbor Orientation model takes the following form: Y a∈A X CL∈CL(a), CR∈CR(a) PTNO(CL, OL, CR, OR|a; Θ) (2) For conciseness, references that are clear from context, such the reference to CL and a in OL(CL, a), are dropped. 3 Maximal Orientation Span As shown in Eq. 2, the TNO model has to enumerate all possible pairing of CL ∈CL(a) and CR ∈CR(a). To make the TNO model more tractable, we simplify the TNO model to consider only the largest left and right neighbors, referred to as the Maximal Orientation Span/MOS (M). More formally, given a = (fj2 j1/ei2 i1), the left and the right MOS of a are: ML(a) = arg max (fj4 j3 /ei4 i3)∈CL(a) (j4 −j3) MR(a) = arg max (fj6 j5 /ei6 i5)∈CR(a) (j6 −j5) Coming back to our example, the left and right MOS of the anchor are ML(a) = (f6 3 /e11 8 ) and MR(a) = (f11 8 /e6 3). In Fig. 1, they are denoted as the largest boxes delineated by solid lines. As such, we reformulate Eq. 
2 into: Y a∈A X CL∈CL(a), CR∈CR(a) PT NO(ML, OL, MR, OR|a; Θ).δ CL==ML∧ CR==MR (3) where δ returns 1 if (CL == ML ∧CR == MR), otherwise 0. Beyond simplifying the computation, the key benefit of modeling MOS is that it serves as a global parameter that can guide or constrain underlying local reorderings. As a case in point, let us consider a cheating exercise where we have to translate the Chinese sentence in Fig. 1 with the following set of hierarchical phrases3: Xa→⟨Aozhou1shi2 X1, Australia1 is2X1⟩ Xb→⟨yu3 Beihan4 X1, X1with3 North4 Korea⟩ Xc→⟨you5bangjiao6, have5dipl.6 rels.⟩ Xd→⟨X1de7shaoshu8 guojia9 zhi10 yi11, one11of10the few8 countries9 that7X1⟩ This set of hierarchical phrases represents a translation model that has resolved all local ambiguities (i.e. local reordering and lexical mappings) except for the spans of the hierarchical phrases. With this example, we want to show that accurate local decisions (rather obviously) don’t always lead to accurate global reordering and to demonstrate that explicit MOS modeling can play a crucial role to address this issue. To do so, we will again focus on the same anchor de (that). 3We use hierarchical phrase-based translation system as a case in point, but the merit is generalizable to other systems. 1266 d⇒ ⟨X1de7shaoshu8 guojia9 zhi10 yi11⟩, ⟨one11of10the few8 countries9 that7X1⟩ a⇒ ⟨⟨Aozhou1shi2 X1⟩de7shaoshu8 guojia9 zhi10 yi11⟩, ⟨one11of10the few8 countries9 that7⟨Australia1 is2X1⟩⟩ b⇒ ⟨⟨Aozhou1shi2 ⟨yu3 Beihan4 X1⟩⟩de7shaoshu8 guojia9 zhi10 yi11⟩, ⟨one11of10the few8 countries9 that7⟨Australia1 is2⟨X1with3 North4 Korea⟩⟩⟩ c⇒ ⟨d ⟨aAozhou1shi2 ⟨byu3 Beihan4 ⟨cyou5bangjiao6⟩c⟩b⟩ade7shaoshu8 guojia9 zhi10 yi11 ⟩d , ⟨one11of10the few8 countries9 that7⟨Australia1 is2⟨⟨have5dipl.6 rels.⟩with3 North4 Korea⟩⟩⟩ Table 1: Derivation of Xd ≺Xa ≺Xb ≺Xc that leads to an incorrect translation. a⇒ ⟨Aozhou1shi2X1⟩, ⟨Australia1 is2X1⟩ b⇒ ⟨Aozhou1shi2⟨yu3Beihan4X1⟩⟩, ⟨Australia1 is2⟨X1with3 North4 Korea⟩⟩ d⇒ ⟨Aozhou1shi2⟨yu3Beihan4⟨X1de7shaoshu8 guojia9 zhi10 yi11⟩⟩⟩, ⟨Australia1 is2⟨⟨one11of10the few8 countries9 that7X1⟩with3 North4 Korea⟩⟩ c⇒ ⟨aAozhou1shi2⟨byu3Beihan4 ⟨d ⟨cyou5bangjiao6⟩cde7shaoshu8 guojia9 zhi10 yi11 ⟩d ⟩b⟩a, Australia1 is2⟨⟨one11of10the few8 countries9 that7⟨have5dipl.6⟩⟩with3 North4 Korea⟩⟩ Table 2: Derivation of Xa ≺Xb ≺Xd ≺Xc that leads to the correct translation. As the rule’s identifier, we attach an alphabet letter to each rule’s left hand side, as such the anchor de (that) appears in rule Xd. We also attach the word indices as the superscript of the source words and project the indices to the target words aligned, as such “have5” suggests that the word “have” is aligned to the 5-th source word, i.e. you. Note that to facilitate the projection, the rules must come with internal word alignment in practice. Now the indices on the target words in the rules are different from those in Fig. 1. We will also extensively use indices in this sense in the subsequent section about decoding. In such a sense, ML(a) = (f6 3 /e6 3) and MR(a) = (f11 8 /e11 8 ). Given the rule set, there are three possible derivations, i.e. Xd ≺Xa ≺Xb ≺Xc, Xa ≺Xb ≺ Xd ≺Xc, and Xa ≺Xd ≺Xb ≺Xc, where ≺indicates that the first operand dominates the second operand in the derivation tree. The application of the rules would show that the first derivation will produce an incorrect reordering while the last two will produce the correct ones. 
Here, we would like to point out that even in this simple example where all local decisions are made accurate, this ambiguity occurs and it would occur even more so in the real translation task where local decisions may be highly inaccurate. Next, we will show that the MOS-related information can help to resolve this ambiguity, by focusing more closely on the first and the second derivations, which are detailed in Tables 1 and 2. Particularly, we want to show that the MOS generated by the incorrect derivation does not match the MOS learnt from Fig. 1. As shown, at the end of the derivation, we have all the information needed to compute the MOS (i.e. Θ) which is equivalent to that available at training time, i.e. the source sentence, the complete translation and the word alignment. Running the same MOS extraction procedure on both derivations would produce the right MOS that agrees with the right MOS previously learnt from Fig. 1, i.e. (f11 8 /e11 8 ). However, that’s not the case for left MOS, which we underline in Tables 1 and 2. As shown, the incorrect derivation produces a left MOS that spans six words, i.e. (f6 1 /e6 1), while the correct derivation produces a left MOS that spans four words, i.e. (f6 3 /e6 3). Clearly, the MOS of the incorrect derivation doesn’t agree with the MOS we learnt from Fig. 1, unlike the MOS of the correct translation. This suggests that explicit MOS modeling would provide a mechanism for resolving crucial global reordering ambiguities that are beyond the ability of local models. Additionally, this illustration also shows a case where MOS acts as a cross-boundary context which effectively relaxes the context-free assumption of hierarchical phrase-based formalism. In Tables 1 and 2’s full derivations, we indicate rule boundaries explicitly by indexing the angle brackets, e.g. ⟨a indicates the beginning of rule Xa in the derivation. As the anchor appears in Xd, we 1267 highlight its boundaries in box frames. de (that)’s MOS respects rule boundaries if and only if all the words come entirely from Xd’s antecedent or ⟨d and ⟩d appears outside of MOS; otherwise it crosses the rule boundaries. As clearly shown in Table 2, the left MOS of the correct derivation (underlined) crosses the rule boundary (of Xd) since ⟨d appears within the MOS. Going back to the formulation, focusing on modeling MOS would simplify the formulation of TNO model from Eq. 2 into: Y a∈A PTNO(ML, OL, MR, OR|a; Θ) (4) which doesn’t require enumerating of all possible pairs of CL and CR. 4 Model Decomposition and Variants To make the model more tractable, we decompose PTNO in Eq. 4 into the following four factors: P(MR|a) × P(OR|MR, a) × P(ML|OR, MR, a) × P(OL|ML, OR, MR, a). Subsequently, we will refer to them as PMR, POR, PML and POL respectively. Each of these factors will act as an additional feature in the log-linear framework of our SMT system. The above decomposition follows a generative story that starts from generating the right neighbor first. There are other equally credible alternatives, but based on empirical results, we settle with the above. Next, we present four different variants of the model (not to be confused with the four factors above). Each variant has a different probabilistic conditioning of the factors. We start by making strong independence assumptions in Model 1 and then relax them as we progress to Model 4. The description of the models is as follow: • Model 1. 
We assume PML and PMR to be equal to 1 and POR ≈P(OR|a; Θ) to be independent of MR and POL ≈P(OL|a; Θ) to be in independent of ML, MR and OR. • Model 2. On top of Model 1, we make POL dependent on POR, thus POL≈P(OL|OR, a; Θ). • Model 3. On top of Model 2, we make POR dependent on MR and POL on MR and ML, thus POR ≈P(OR|MR, a; Θ) and POL ≈ P(OL|ML, OR, MR; a, Θ) . • Model 4. On top of Model 3, we model PMR and PML as multinomial distributions estimated from training data. Model 1 represents a model that focuses on making accurate one-sided decisions, independent of the decision on the other side. Model 2 is designed to address the deficiency of Model 1 since Model 1 may assign non-zero probability to improbable assignment of orientation values, e.g. Monotone Adjacent for the left neighbor and Reverse Adjacent for the right neighbor. Model 2 does so by conditioning POL on OR. In Model 3, we start incorporating MOS-related information in predicting OL and OR. In Model 4, we explicitly model the MOS of each anchor. 5 Training The TNO model training consists of two different training regimes: 1) discriminative for training POL,POR; and 2) generative for training PML, PMR. Before describing the specifics, we start by describing the procedure to extract anchors and their corresponding MOS from training data, from which we collect statistics and extract features to train the model. For each aligned sentence pair (F, E, ∼) in the training data, the training starts with the identification of the regions in the source sentences as anchors (A). For our Chinese-English experiments, we use a simple heuristic that equates as anchors, single-word chunks whose corresponding word class belongs to closed-word classes, bearing a close resemblance to (Setiawan et al., 2007). In total, we consider 21 part-of-speech tags; some of which are as follow: VC (copula), DEG, DEG, DER, DEV (de-related), PU (punctuation), AD (adjectives) and P (prepositions). Next we generate all possible chunks ∆(Θ) as previously described in Sec. 3. We then define a function MinC(∆, j1, j2) which returns the shortest chunk that can span from j1 to j2. If (fj2 j1 /ei2 i1) ∈∆, then MinC returns (fj2 j1 /ei2 i1). The algorithm to extract MOS takes ∆and an anchor a = (fj2 j1 /ei2 i1) as input; and outputs the chunk that qualifies as MOS or none. Alg. 1 provides the algorithm to extract the right MOS; the algorithm to extract the left MOS is identical to Alg. 1, except that it scans for chunks to the left of the anchor. In Alg. 1, there are two intermediate parameters si and ei which represent the active search range and should initially be set to j2 + 1 and |F| respectively. Once we obtain a, ML(a) and MR(a), we compute OL(ML(a), a) and OR(MR(a), a) and are ready for training. 1268 To estimate POL and POR, we train discriminative classifiers that predict the orientation values and use the normalized posteriors at decoding time as additional feature scores in SMT’s log linear framework. We train the classifiers on a rich set of binary features ranging from lexical to partof-speech (POS) and to syntactic features. 
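For clarity, here is a sketch of how the orientation labels used in training could be computed from an anchor and its left and right neighbors, following the four-way definition in Section 2. Chunks are represented as ((source start, source end), (target start, target end)) tuples; this is an illustrative reading, not the authors' code.

```python
# A sketch of assigning the four orientation labels from Section 2 to an
# anchor's left and right neighbors, as needed when labelling training
# instances.  Chunks are ((source start, source end), (target start, target end))
# tuples; this is an illustrative reading, not the authors' code.

def left_orientation(anchor, left):
    (_, _), (i1, i2) = anchor
    (_, _), (i3, i4) = left
    if i4 + 1 == i1:
        return "MA"   # Monotone Adjacent
    if i2 + 1 == i3:
        return "RA"   # Reverse Adjacent
    if i4 + 1 < i1:
        return "MG"   # Monotone Gap
    if i2 + 1 < i3:
        return "RG"   # Reverse Gap
    return None       # should not happen for well-formed, non-overlapping chunks

def right_orientation(anchor, right):
    (_, _), (i1, i2) = anchor
    (_, _), (i5, i6) = right
    if i2 + 1 == i5:
        return "MA"
    if i6 + 1 == i1:
        return "RA"
    if i2 + 1 < i5:
        return "MG"
    if i6 + 1 < i1:
        return "RG"
    return None

# The anchor de (that) from Figure 1 with its maximal left and right neighbors:
anchor = ((7, 7), (7, 7))
mos_left, mos_right = ((3, 6), (8, 11)), ((8, 11), (3, 6))
print(left_orientation(anchor, mos_left), right_orientation(anchor, mos_right))  # RA RA
```

The same functions apply unchanged to any neighbor chunk, not just the MOS; for instance, they yield the Reverse Gap labels reported in Section 2 for the smallest left and right neighbors of Figure 1.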
Algorithm 1: Function MREx input : a = (fj2 j1 /ei2 i1), si, ei: int; ∆: chunks output: (fj4 j3 /ei4 i3) : chunk or ∅ (fj4 j3 /ei4 i3) = MinC(∆, j2 + 1, ei) if (j3 == j2 + 1 ∧j4 == ei) then →fj4 j3 /ei4 i3 else if (j2 + 1 == ei) then →∅ else if (ei-2 ≤si) then →MREx(a, si, ei −1, ∆) else m = ⌈(si+ei)/2⌉ (fj4 j3 /ei4 i4) = MinC(∆, j2 + 1, m) if (j3 == j2 + 1) then c = MREx(a, m, ei −1, ∆) if (c == ∅) then →fj4 j3 /ei4 i3 else →c end else →MREx(a, si, m −1, ∆) end end end end Suppose a = (fj2 j1 /ei2 i1), ML(a) = (fj4 j3 /ei4 i3) and ML(a) = (fj6 j5 /ei6 i5), then based on the context’s location, the elementary features employed in our classifiers can be categorized into: 1. anchor-related: slex (the actual word of fj2 j1 ), spos (part-of-speech (POS) tag of slex), sparent (spos’s parent in the parse tree), tlex (ei2 i1’s actual target word).. 2. surrounding: lslex (the previous word / fj1−1 j1−1 ), rslex (the next word / fj2+1 j2+1 ), lspos (lslex’s POS tag), rspos (rslex’s POS tag), lsparent (lslex’s parent), rsparent (rslex’s parent). 3. non-local: lanchorslex (the previous anchor’s word) , ranchorslex (the next anchor’s word), lanchorspos (lanchorslex’s POS tag), ranchorspos (ranchorslex’s POS tag). 4. MOS-related: mosl int slex (the actual word of fj3 j3 ), mosl ext slex (the actual word of fj3 j3 ), mosl int spos (mosl int slex’s POS tag), mosl ext spos (mosl ext spos’s POS tag), mosr int slex (the actual word of fj3 j3 ), mosr ext slex (the actual word of fj3 j3 ), mosr int spos (mosr int slex’s POS tag), mosr ext spos (mosr ext spos’s POS tag). For Model 1, we train one classifier each for POR and POL. For Model 2-4, we train four classifiers for POL for each value of OR. We use only the MOS features for Model 3 and 4. Additionally, we augment the feature set with compound features, e.g. conjunction of the lexical of the anchor and the lexical of the left and the right anchors. Although they increase the number of features significantly, we found that these compound features are empirically beneficial. We come up with > 50 types of features, which consist of a combination of elementary and compound features. In total, we generate hundreds of millions of such features from the training data. To keep the number features to a manageable size, we employ the L1-regularization in training to enforce sparse solutions, using the off-the-shelf LIBLINEAR toolkit (Fan et al., 2008). After training, the number of features in our classifiers decreases to below 5 million features for each classifier. We train PML and PMR via the relative frequency principle. To avoid the sparsity issue, we represent ML as (mosl int spos,mosl ext spos) and MR as (mosr int spos,mosr ext spos). We condition PML and PMR only on spos and the orientation, estimating them as follow: P(ML|spos, OL) = N(ML, spos, OL) N(spos, OL) P(MR|spos, OR) = N(MR, spos, OR) N(spos, OR) where N returns the count of the events in the training data. 1269 Target string (w/ source index) Symbol(s) read Op. Stack(s) (1) Xc have5 dipl.6 rels. [5][6] S,S,R Xc:[5-6] (2) Xd one11 of10 few8 countries9 [11][10] S,S,R [10-11] that7 Xc (3) [8][9] S,S,R,R [8-11] (4) [7] S [8-11][7] (5) Xc:[5,6] S Xd:[8-11][7][5,6] (6) Xb Xd with3 North4 Korea Xd:[8-11][7][5,6] S [8-11][7][5,6] (7) [3][4] S,S,R,R Xb:[8-11][7][3-6] (8) Xa Australia1 is2 Xb [1][2] S,S,R [1-2] (9) Xb:[8-11][7][3,6] S,A Xa:[1-2][8-11][7][3,6] Table 3: The application of the shift-reduce parsing algorithm, which corresponds to Table 2’s derivation. 
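The sketch below indicates how a binary feature vector of the kind listed above might be assembled for one anchor before being passed to an L1-regularised linear classifier. Only a subset of the anchor-related, surrounding and MOS-related features is shown; parse-tree, non-local and most compound features are omitted, the internal/external reading of the MOS features is an assumption, and the part-of-speech tags in the usage example are guessed for illustration.

```python
# A sketch of assembling binary features for one anchor before feeding an
# L1-regularised linear classifier.  Only a subset of the feature inventory
# described above is shown; the int/ext reading of the MOS features is assumed.

def orientation_features(src_words, src_pos, anchor_j, mos_l, mos_r, tlex):
    n = len(src_words)
    lslex = src_words[anchor_j - 1] if anchor_j > 0 else "<s>"
    rslex = src_words[anchor_j + 1] if anchor_j + 1 < n else "</s>"
    lspos = src_pos[anchor_j - 1] if anchor_j > 0 else "<s>"
    rspos = src_pos[anchor_j + 1] if anchor_j + 1 < n else "</s>"
    return {
        # anchor-related
        "slex=" + src_words[anchor_j]: 1.0,
        "spos=" + src_pos[anchor_j]: 1.0,
        "tlex=" + tlex: 1.0,
        # surrounding
        "lslex=" + lslex: 1.0,
        "rslex=" + rslex: 1.0,
        "lspos=" + lspos: 1.0,
        "rspos=" + rspos: 1.0,
        # MOS-related (int = MOS word next to the anchor, ext = far end; assumed)
        "mosl_int_slex=" + src_words[mos_l[1]]: 1.0,
        "mosl_ext_slex=" + src_words[mos_l[0]]: 1.0,
        "mosr_int_slex=" + src_words[mos_r[0]]: 1.0,
        "mosr_ext_slex=" + src_words[mos_r[1]]: 1.0,
        # one example compound feature
        "slex|lslex|rslex=" + "|".join([src_words[anchor_j], lslex, rslex]): 1.0,
    }

# The Figure 1 sentence with de (that) as the anchor; POS tags are guessed.
feats = orientation_features(
    src_words=["Aozhou", "shi", "yu", "Beihan", "you", "bangjiao",
               "de", "shaoshu", "guojia", "zhi", "yi"],
    src_pos=["NR", "VC", "P", "NR", "VE", "NN", "DEC", "JJ", "NN", "LC", "CD"],
    anchor_j=6, mos_l=(2, 5), mos_r=(7, 10), tlex="that")
print(len(feats))
```

Such a string-keyed feature dictionary can then be vectorised and handed to an off-the-shelf linear learner such as LIBLINEAR, in line with the training setup described above.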
6 Decoding Integrating the TNO Model into syntax-based SMT systems is non-trivial, especially with the MOS modeling. The method described in Sec. 3 assumes Θ = (F, E, ∼), thus it is only applicable at training or at the last stage of decoding. Since many reordering decisions may have been made at the earlier stages, the late application of TNO model would limit the utility of the model. In this section, we describe an algorithm that facilitates the incremental construction of MOS and the computation of TNO model on partial derivations. The algorithm bears a close resemblance to the shift-reduce algorithm where a stack is used to accumulate (partial) information about a, ML and MR for each a ∈A in the derivation. This algorithm takes an input stream and applies either the shift or the reduce operations starting from the beginning until the end of the stream. The shift operation advances the input stream by one symbol and push the symbol into the stack; while the reduce operation applies some reduction rule to the topmost elements of the stack. The algorithm terminates at the end of the input stream where the resulting stack will be propagated to the parent for the later stage of decoding. In our case, the input stream is the target string of the rule and the symbol is the corresponding source index of the elements of the target string. The reduction rule looks at two indices and merge them if they are adjacent (i.e. has no intervening phrase). We forbid the application of the reduction rule to anchors. Table 3 shows the execution trace of the algorithm for the derivation described in Table 2. As shown, the algorithm starts with an empty stack. It then projects the source index to the corresponding target word and then enumerates the target string in a left to right fashion. If it finds a target word with a source index, it applies the shift operation, pushing the index to the stack. Unless the symbol corresponds to an anchor, it tries to apply the reduce operation. Line (4) indicates the special treatment to the anchor. If the symbol read is a nonterminal, then we push the entire stack that corresponds to that nonterminal. For example, when the algorithm reads Xd at line (6), it pushes the entire stack from line (5). This algorithm facilitates the incremental construction of MOS which may cross rule boundaries. For example, at the end of the application of Xd at line (5), the current left MOS is [5-6]. However, the algorithm grows it to [3-6] after the application of rule Xb at line (7). Furthermore, it allows us to compute the models from partial hypothesis. For example, at line (5), we can compute POL by considering [5,6] as ML to be updated with [3,6] in line (7). This way, we expect our TNO model would play a bigger role at decoding time. Specific to SCFG-based translation, the values of OL and OR are identical in the partial or in the full derivations. For example, the orientation values of de (that)’s left neighbor is always RA. This statement holds, even though at the end of Section 2, we stated that de (that)’s left neighbor may have other orientation values, i.e. RG for CL(a) = (f6 6 /e9 9). The formal proof is omitted, but the intuition comes from the fact that the derivations for SCFG-based translation are subset of ∆(Θ) and that (f6 6 /e9 9) will never become ML for MinC(CL(a), a) respectively (chunk that spans a and CL). Consequently, for Model 1 and Model 2, we can obtain the model score earlier in the decoding process. 
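A compact sketch of the shift-reduce bookkeeping is given below: the stack holds source-index spans, the reduce step merges adjacent spans, and spans containing an anchor are never merged, so the spans flanking an anchor grow into its (partial) left and right MOS as larger rules are applied. The flat symbol list and the child_stacks lookup for nonterminals are simplifications for illustration; on rule Xd from the running example the sketch reproduces the stack of line (5) in Table 3.

```python
# A compact sketch of the shift-reduce bookkeeping from Section 6.  The stack
# holds source-index spans; adjacent spans are merged (reduce), and anchor
# positions are never merged, so the spans next to an anchor grow into its
# left/right MOS as larger rules are applied.  Interfaces are simplified.

def shift_reduce(symbols, anchors, child_stacks=None):
    """symbols: for each target token, a source index (int), None for an
    unaligned token, or a nonterminal name whose saved stack is looked up in
    child_stacks.  Returns the stack of (lo, hi) source spans."""
    child_stacks = child_stacks or {}
    stack = []

    def push(span):
        stack.append(span)
        # reduce while the top two spans are adjacent and neither is an anchor
        while len(stack) >= 2:
            (lo2, hi2), (lo1, hi1) = stack[-2], stack[-1]
            if {lo1, hi1} & anchors or {lo2, hi2} & anchors:
                break
            if hi2 + 1 == lo1 or hi1 + 1 == lo2:   # adjacent on the source side
                stack[-2:] = [(min(lo1, lo2), max(hi1, hi2))]
            else:
                break

    for sym in symbols:
        if sym is None:
            continue
        if isinstance(sym, str):                   # nonterminal: push its whole stack
            for span in child_stacks[sym]:
                push(span)
        else:                                      # terminal: shift its source index
            push((sym, sym))
    return stack

# Rule X_d from the running example: "one11 of10 few8 countries9 that7 X_c",
# with source position 7 (de/that) as the anchor and X_c covering [5, 6].
print(shift_reduce([11, 10, 8, 9, 7, "Xc"], anchors={7},
                   child_stacks={"Xc": [(5, 6)]}))
# -> [(8, 11), (7, 7), (5, 6)]   (cf. line (5) of Table 3)
```

Because anchors are never merged, the spans adjacent to an anchor on the stack are exactly its partial left and right MOS at that point in the derivation, which is what allows the model to be scored before the full context is available.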
7 Experiments
Our baseline system is a state-of-the-art string-to-dependency system (Shen et al., 2008). The system is trained on 10 million parallel sentences that are available for Phase 1 of the DARPA BOLT Chinese-English MT task. The training corpora cover a mix of genres (newswire, weblog, broadcast news, broadcast conversation and discussion forums) and come from various sources such as LDC, HK Law, HK Hansard and UN data. In total, our baseline model employs about 40 features, including four from our proposed Two-Neighbor Orientation model. In addition to the standard features, including the rule translation probabilities, we incorporate features that are found useful for developing a state-of-the-art baseline, such as the provenance features (Chiang et al., 2011). We use a large 6-gram language model, which was trained on 10 billion English words from multiple corpora, including the English side of our parallel corpus plus other corpora such as Gigaword (LDC2011T07) and Google News. We also train a class-based language model (Chen, 2009) on two million English sentences selected from the parallel corpus. As the backbone of our string-to-dependency system, we train 3-gram models for left and right dependencies and a unigram model for heads using the target side of the bilingual training data.
To train our Two-Neighbor Orientation model, we select a subset of 5 million aligned sentence pairs. For the tuning and development sets, we set aside 1275 and 1239 sentences selected from the LDC2010E30 corpus. We tune the decoding weights with PRO (Hopkins and May, 2011) to maximize BLEU-TER. As the blind test set, we report performance on the NIST MT08 evaluation set, which consists of 691 sentences from newswire and 666 sentences from weblog. We pick the weights that produce the highest development set scores to decode the test set.
Table 4 summarizes the experimental results on NIST MT08 newswire and weblog. In column 2, we report the classification accuracy on a subset of the training data. Note that these numbers are for reference only and are not directly comparable with each other, since the features used in these classifiers include gold-standard information, such as the anchors' target words, the anchors' MOS-related features (Models 3 & 4) and the orientation of the right MOS (Models 2-4), all of which have to be predicted at decoding time. In columns 3 and 5, we report the BLEU scores, while in columns 4 and 6, we report the TER scores.

         Acc     MT08 nw           MT08 wb
                 BLEU     TER      BLEU     TER
  S2D     -      36.77    53.28    26.34    57.41
  M1     72.5    37.60    52.70    27.59    56.33
  M2     77.4    37.86    52.68    27.74    56.11
  M3     84.5    38.02    52.42    28.22    55.82*
  M4     84.5    38.55*   52.41*   28.44*   56.45

Table 4: The NIST MT08 results on newswire (nw) and weblog (wb) genres. S2D is the baseline string-to-dependency system (line 1), on top of which Two-Neighbor Orientation Models 1 to 4 are employed (lines 2-5). The best BLEU and TER results on each genre are marked with *. For BLEU, higher scores are better, while for TER, lower scores are better.

The performance of our baseline string-to-dependency syntax-based SMT is shown in the first line, followed by the performance of our Two-Neighbor Orientation model from Model 1 to Model 4. As shown, the empirical results confirm our intuition that SMT can greatly benefit from reordering models that incorporate cross-unit contextual information. Model 1 provides most of the gain across the two genres, of around +0.9 to +1.2 BLEU and -0.5 to -1.1 TER.
Model 2 which conditions POL on OR provides an additional +0.2 BLEU improvement on BLEU score consistently across the two genres. As shown in line 4, we see a stronger improvement in the inclusion of MOS-related information as features in Model 3. In newswire, Model 3 gives an additional +0.4 BLEU and -0.2 TER, while in weblog, it gives a stronger improvement of an additional +0.5 BLEU and -0.3 TER. The inclusion of explicit MOS modeling in Model 4 gives a significant BLEU score improvement of +0.5 but no TER improvement in newswire. In weblog, Model 4 gives a mixed results of +0.2 BLEU score improvement and a hit of +0.6 TER. We conjecture that the weblog text has a more ambiguous orientation span that are more challenging to learn. In total, our TNO model gives an encouraging result. Our most advanced model gives significant improvement of +1.8 BLEU/-0.8 TER in newswire domain and +2.1 BLEU/-1.0 TER over a strong string-to-dependency syntax-based SMT enhanced with additional state-of-the-art features. 1271 8 Related Work Our work intersects with existing work in many different respects. In this section, we mainly focus on work related to the probabilistic conditioning of our TNO model and the MOS modeling. Our TNO model is closely related to the Unigram Orientation Model (UOM) (Tillman, 2004), which is the de facto reordering model of phrasebased SMT (Koehn et al., 2007). UOM views reordering as a process of generating (b, o) in a left-to-right fashion, where b is the current phrase pair and o is the orientation of b with the previously generated phrase pair b′. UOM makes strong independence assumptions and formulates the model as P(o|b). Tillmann and Zhang (2007) proposed a Bigram Orientation Model (BOM) to include both phrase pairs (b and b′) into the model. Their original intent is to model P(o, b|o′, b′), but perhaps due to sparsity concerns, they settle with P(o|b, b′), dropping the conditioning on the previous orientation o′. Subsequent improvements use the P(o|b, b′) formula, for example, for incorporating various linguistics feature like part-ofspeech (Zens and Ney, 2006), syntactic (Chang et al., 2009), dependency information (Bach et al., 2009) and predicate-argument structure (Xiong et al., 2012). Our TNO model is more faithful to the BOM’s original formulation. Our MOS concept is also closely related to hierarchical reordering model (Galley and Manning, 2008) in phrase-based decoding, which computes o of b with respect to a multi-block unit that may go beyond b′. They mainly use it to avoid overestimating “discontiguous” orientation but fall short in modeling the multi-block unit, perhaps due to data sparsity issue. Our MOS is also closely related to the efforts of modeling the span of hierarchical phrases in formally syntax-based SMT. Early works reward/penalize spans that respect the syntactic parse constituents of an input sentence (Chiang, 2005), and (Marton and Resnik, 2008). (Xiong et al., 2009) learn the boundaries from parsed and aligned training data, while (Xiong et al., 2010) learn the boundaries from aligned training data alone. Recent work couples span modeling tightly with reordering decisions, either by adding an additional feature for each hierarchical phrase (Chiang et al., 2008; Shen et al., 2009) or by refining the nonterminal label (Venugopal et al., 2009; Huang et al., 2010; Zollmann and Vogel, 2011). Common to this work is that the spans modeled may not correspond to MOS, which may be suboptimal as discussed in Sec. 3. 
In equating anchors with the function word class, our work, particularly Model 1, is closely related to the function word-centered model of Setiawan et al. (2007) and Setiawan et al. (2009). However, we provide a discriminative treatment to the model to include a richer set of features including the MOS modeling. Our work in incorporating global context also intersects with existing work in Preordering Model (PM), e.g. (Niehues and Kolss, 2009; Costa-juss`a and Fonollosa, 2006; Genzel, 2010; Visweswariah et al., 2011; Tromble and Eisner, 2009). The goal of PM is to reorder the input sentence F into F ′ whose order is closer to the target language order, whereas the goal of our model is to directly reorder F into the target language order. The crucial difference is that we have to integrate our model into SMT decoder, which is highly non-trivial. 9 Conclusion We presented a novel approach to address a kind of long-distance reordering that requires global cross-boundary contextual information. Our approach, which we formulate as a Two-Neighbor Orientation model, includes the joint modeling of two orientation decisions and the modeling of the maximal span of the reordered chunks through the concept of Maximal Orientation Span. We describe four versions of the model and implement an algorithm to integrate our proposed model into a syntax-based SMT system. Empirical results confirm our intuition that incorporating crossboundaries contextual information improves translation quality. In a large scale Chinese-to-English translation task, we achieve a significant improvement over a strong baseline. In the future, we hope to continue this line of research, perhaps by learning to identify anchors automatically from training data, incorporating a richer set of linguistics features such as dependency structure and strengthening the modeling of Maximal Orientation Span. Acknowledgements We would like to acknowledge the support of DARPA under Grant HR0011-12-C-0015 for funding part of this work. The views, opinions, and/or findings contained in this article/presentation are those of the author/ presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the DARPA. 1272 References Nguyen Bach, Qin Gao, and Stephan Vogel. 2009. Source-side dependency tree reordering models with subtree movements and constraints. In Proceedings of the Twelfth Machine Translation Summit (MTSummit-XII), Ottawa, Canada, August. International Association for Machine Translation. Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, and Christopher D. Manning. 2009. Discriminative reordering with Chinese grammatical relations features. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation (SSST-3) at NAACL HLT 2009, pages 51–59, Boulder, Colorado, June. Association for Computational Linguistics. Stanley Chen. 2009. Shrinking exponential language models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 468–476, Boulder, Colorado, June. Association for Computational Linguistics. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 224–233, Honolulu, Hawaii, October. David Chiang, Steve DeNeefe, and Michael Pust. 2011. Two easy improvements to lexical weighting. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 455–460, Portland, Oregon, USA, June. Association for Computational Linguistics. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 263–270, Ann Arbor, Michigan, June. Association for Computational Linguistics. Marta R. Costa-juss`a and Jos´e A. R. Fonollosa. 2006. Statistical machine reordering. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 70–76, Sydney, Australia, July. Association for Computational Linguistics. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 848–856, Honolulu, Hawaii, October. Association for Computational Linguistics. Dmitriy Genzel. 2010. Automatically learning sourceside reordering rules for large scale machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 376–384, Beijing, China, August. Coling 2010 Organizing Committee. Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352–1362, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Zhongqiang Huang, Martin Cmejrek, and Bowen Zhou. 2010. Soft syntactic constraints for hierarchical phrase-based translation using latent syntactic distributions. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 138–147, Cambridge, MA, October. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation, June. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proceedings of The 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1003– 1011, Columbus, Ohio, June. Masaaki Nagata, Kuniko Saito, Kazuhide Yamamoto, and Kazuteru Ohashi. 2006. A clustered global phrase reordering model for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 713–720, Sydney, Australia, July. Association for Computational Linguistics. Jan Niehues and Muntsin Kolss. 2009. A POS-based model for long-range reorderings in SMT. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 206–214, Athens, Greece, March. Association for Computational Linguistics. Hendra Setiawan, Min-Yen Kan, and Haizhou Li. 2007. Ordering phrases with function words. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 712– 719, Prague, Czech Republic, June. Association for Computational Linguistics. 
Hendra Setiawan, Min Yen Kan, Haizhou Li, and Philip Resnik. 2009. Topological ordering of function words in hierarchical phrase-based translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing 1273 of the AFNLP, pages 324–332, Suntec, Singapore, August. Association for Computational Linguistics. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL-08: HLT, pages 577–585, Columbus, Ohio, June. Association for Computational Linguistics. Libin Shen, Jinxi Xu, Bing Zhang, Spyros Matsoukas, and Ralph Weischedel. 2009. Effective use of linguistic and contextual information for statistical machine translation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 72–80, Singapore, August. Association for Computational Linguistics. Christoph Tillman. 2004. A unigram orientation model for statistical machine translation. In HLT-NAACL 2004: Short Papers, pages 101–104, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Christoph Tillmann and Tong Zhang. 2007. A block bigram prediction model for statistical machine translation. ACM Transactions on Speech and Language Processing (TSLP), 4(3). Roy Tromble and Jason Eisner. 2009. Learning linear ordering problems for better translation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1007–1016, Singapore, August. Association for Computational Linguistics. Ashish Venugopal, Andreas Zollmann, Noah A. Smith, and Stephan Vogel. 2009. Preference grammars: Softening syntactic constraints to improve statistical machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 236–244, Boulder, Colorado, June. Association for Computational Linguistics. Karthik Visweswariah, Rajakrishnan Rajkumar, Ankur Gandhe, Ananthakrishnan Ramanathan, and Jiri Navratil. 2011. A word reordering model for improved machine translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 486–496, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Deyi Xiong, Min Zhang, Aiti Aw, and Haizhou Li. 2009. A syntax-driven bracketing model for phrasebased translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 315– 323, Suntec, Singapore, August. Association for Computational Linguistics. Deyi Xiong, Min Zhang, and Haizhou Li. 2010. Learning translation boundaries for phrase-based decoding. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 136–144, Los Angeles, California, June. Association for Computational Linguistics. Deyi Xiong, Min Zhang, and Haizhou Li. 2012. Modeling the translation of predicate-argument structure for smt. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 902–911, Jeju Island, Korea, July. Association for Computational Linguistics. Richard Zens and Hermann Ney. 2006. Discriminative reordering models for statistical machine translation. 
In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL): Proceedings of the Workshop on Statistical Machine Translation, pages 55–63, New York City, NY, June. Association for Computational Linguistics. Andreas Zollmann and Stephan Vogel. 2011. A word-class approach to labeling PSCFG rules for machine translation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1–11, Portland, Oregon, USA, June. Association for Computational Linguistics.
2013
124
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1275–1284, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Cut the noise: Mutually reinforcing reordering and alignments for improved machine translation Karthik Visweswariah IBM Research India [email protected] Mitesh M. Khapra IBM Research India [email protected] Ananthakrishnan Ramanathan IBM Research India [email protected] Abstract Preordering of a source language sentence to match target word order has proved to be useful for improving machine translation systems. Previous work has shown that a reordering model can be learned from high quality manual word alignments to improve machine translation performance. In this paper, we focus on further improving the performance of the reordering model (and thereby machine translation) by using a larger corpus of sentence aligned data for which manual word alignments are not available but automatic machine generated alignments are available. The main challenge we tackle is to generate quality data for training the reordering model in spite of the machine alignments being noisy. To mitigate the effect of noisy machine alignments, we propose a novel approach that improves reorderings produced given noisy alignments and also improves word alignments using information from the reordering model. This approach generates alignments that are 2.6 f-Measure points better than a baseline supervised aligner. The data generated allows us to train a reordering model that gives an improvement of 1.8 BLEU points on the NIST MT-08 Urdu-English evaluation set over a reordering model that only uses manual word alignments, and a gain of 5.2 BLEU points over a standard phrase-based baseline. 1 Introduction Dealing with word order differences between source and target languages presents a significant challenge for machine translation systems. Failing to produce target words in the correct order results in machine translation output that is not fluent and is often very hard to understand. These problems are particularly severe when translating between languages which have very different structure. Phrase based systems (Koehn et al., 2003) use lexicalized distortion models (Al-Onaizan and Papineni, 2006; Tillman, 2004) and scores from the target language model to produce words in the correct order in the target language. These systems typically are only able to capture short range reorderings and the amount of data required to potentially capture longer range reordering phenomena is prohibitively large. There has been a large body of work showing the efficacy of preordering source sentences using a source parser and applying hand written or automatically learned rules (Collins et al., 2005; Wang et al., 2007; Ramanathan et al., 2009; Xia and McCord, 2004; Genzel, 2010; Visweswariah et al., 2010). Recently, approaches that address the problem of word order differences between the source and target language without requiring a high quality source or target parser have been proposed (DeNero and Uszkoreit, 2011; Visweswariah et al., 2011; Neubig et al., 2012). These methods use a small corpus of manual word alignments (where the words in the source sentence are manually aligned to the words in the target sentence) to learn a model to preorder the source sentence to match target order. In this paper, we build upon the approach in (Visweswariah et al., 2011) which uses manual word alignments for learning a reordering model. 
Specifically, we show that we can significantly improve reordering performance by using a large number of sentence pairs for which manual word alignments are not available. The motivation for going beyond manual word alignments is clear: the reordering model can have millions of features and estimating weights for the features on thousands of sentences of manual word alignments is 1275 likely to be inadequate. One approach to deal with this problem would be to use only part-of-speech tags as features for all but the most frequent words. This will cut down on the number of features and perhaps the model would be learnable with a small set of manual word alignments. Unfortunately, as we will see in the experimental section, leaving out lexical information from the models hurts performance even with a relatively small set of manual word alignments. Another option would be to collect more manual word alignments but this is undesirable because it is time consuming and expensive. The challenge in going beyond manual word alignments and using machine alignments is the noise in the machine alignments which affects the performance of the reordering model (see Section 5). We illustrate this with the help of a motivating example. Consider the example English sentence and its translation shown in Figure 1. He went to the stadium to play vaha khelne keliye stadium ko gaya Figure 1: An example English sentence with its Urdu translation with alignment links. Red (dotted) links are incorrect links while the blue (dashed) links are the corresponding correct links. A standard word alignment algorithm that we used (McCarley et al., 2011) made the mistake of mis-aligning the Urdu ko and keliye (it switched the two). Deriving reference reorderings from these wrong alignments would give us an incorrect reordering. A reordering model trained on such incorrect reorderings would obviously perform poorly. Our task is thus two-fold (i) improve the quality of machine alignments (ii) use these less noisy alignments to derive cleaner training data for a reordering model. Before proceeding, we first point out that the two tasks, viz., reordering and word alignment are related: Having perfect reordering makes the alignment task easier while having perfect alignments in turn makes the task of finding reorderings trivial. Motivated by this fact, we introduce models that allow us to connect the source/target reordering and the word alignments and show that these models help in mutually improving the performance of word alignments and reordering. Specifically, we build two models: the first scores reorderings given the source sentence and noisy alignments, the second scores alignments given the noisy source and target reorderings and the source and target sentences themselves. The second model helps produce better alignments, while we use the first model to help generate better reference reordering given noisy alignments. These improved reference reorderings will then be used to train a reordering model. Our experiments show that reordering models trained using these improved machine alignments perform significantly better than models trained only on manual word alignments. This results in a 1.8 BLEU point gain in machine translation performance on an Urdu-English machine translation task over a preordering model trained using only manual word alignments. In all, this increases the gain in performance by using the preordering model to 5.2 BLEU points over a standard phrasebased system with no preordering. 
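To make the effect of such alignment noise concrete, the toy snippet below derives a source-side reordering by sorting source words by the mean position of their aligned target words (the derivation used later in Section 3) and shows how swapping the links of ko and keliye changes the induced order. The specific alignment links are assumed for illustration only and are not read off the figure.

```python
# A toy illustration of how a single wrong alignment link corrupts the derived
# reference reordering (cf. Figure 1).  The alignment links below are assumed
# for illustration; the derivation (sort source words by the mean position of
# their aligned target words) is the one described in Section 3.

def induced_order(src_words, links):
    """links: {source index: list of aligned target indices}; unaligned source
    words are simply dropped, as in the reference-reordering derivation."""
    means = {j: sum(t) / len(t) for j, t in links.items() if t}
    return [src_words[j] for j in sorted(means, key=lambda j: (means[j], j))]

urdu = ["vaha", "khelne", "keliye", "stadium", "ko", "gaya"]
# English: He(0) went(1) to(2) the(3) stadium(4) to(5) play(6)
good = {0: [0], 1: [6], 2: [5], 3: [4], 4: [2], 5: [1]}
noisy = {**good, 2: [2], 4: [5]}   # the aligner swaps the links of keliye and ko

print(induced_order(urdu, good))   # ['vaha', 'gaya', 'ko', 'stadium', 'keliye', 'khelne']
print(induced_order(urdu, noisy))  # ['vaha', 'gaya', 'keliye', 'stadium', 'ko', 'khelne']
```

Under the assumed correct links the induced order matches the English word order, while the noisy links push keliye and ko into the wrong positions, which is exactly the kind of corrupted reference reordering that motivates coupling the two tasks.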
The rest of this paper is structured as follows. Section 2 describes the main reordering issues in Urdu-English translation. Section 3 introduces the reordering modeling framework that forms the basis for our work. Section 4 describes the two models we use to tie together reordering and alignments and how we use these models to generate training data for training our reordering model. Section 5 presents the experimental setup used for evaluating the models proposed in this paper on an Urdu-English machine translation task. Section 6 presents the results of our experiments. We describe related work in Section 7 and finally present some concluding remarks and potential future work in Section 8.

2 Reordering issues in Urdu-English translation

In this section we describe the main sources of word order differences between Urdu and English, since this is the language pair we experiment with in this paper. The typical word order in Urdu is Subject-Object-Verb, unlike English, in which the order is Subject-Verb-Object. Urdu has case markers that sometimes (but not always) mark the subject and the object of a sentence. This difference in the placement of verbs can often lead to movements of verbs over long distances (depending on the number of words in the object). Phrase based systems do not capture such long distance movements well. Another difference is that Urdu uses postpositions, unlike English, which uses prepositions. This can also lead to long range movements depending on the length of the noun phrase that the post-position follows. The order of noun phrases and prepositional phrases is also swapped in Urdu as compared with English.

3 Reordering model

In this section we briefly describe the reordering model (Visweswariah et al., 2011) that forms the basis of our work. We also describe an approximation we make in the training process that significantly speeds up training without much loss of accuracy, which enables training on much larger data sets. Consider a source sentence w that we would like to reorder to match the target order. Let π represent a candidate permutation of the source sentence w; π_i denotes the index of the word in the source sentence that maps to position i in the candidate reordering, so reordering with this candidate permutation π turns the sentence w into w_{π_1}, w_{π_2}, ..., w_{π_n}. The reordering model we use assigns costs to candidate permutations as:

C(\pi \mid w) = \sum_{i} c(\pi_{i-1}, \pi_i).

The costs c(m, n) are pairwise costs of putting w_m immediately before w_n in the reordering. We reorder the sentence w according to the permutation π that minimizes the cost C(π|w). We find the minimal cost permutation by converting the problem into a symmetric Travelling Salesman Problem (TSP) and then using an implementation of the chained Lin-Kernighan heuristic (Applegate et al., 2003). The costs c(m, n) in the reordering model are parameterized by a linear model:

c(m, n) = \theta^{T} \Phi(w, m, n),

where θ is a learned vector of weights and Φ is a vector of binary feature functions that inspect the words and POS tags of the source sentence at and around positions m and n. We use the features (Φ) described in Visweswariah et al. (2011), which were based on features used in dependency parsing (McDonald et al., 2005a). To learn the weight vector θ we require a corpus of sentences w with their desired reorderings π*.
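To make the cost model concrete, the sketch below scores a candidate permutation under pairwise costs and builds a cheap greedy reordering. This is only an illustration of the formulation above, not the authors' implementation: the feature templates and weights shown are hypothetical placeholders, and the actual system searches for the minimum-cost permutation by solving a TSP with the chained Lin-Kernighan heuristic rather than greedily.

```python
from typing import Dict, List

def pairwise_cost(words: List[str], tags: List[str], m: int, n: int,
                  weights: Dict[str, float]) -> float:
    """Cost c(m, n) of placing word m immediately before word n (m = -1 denotes sentence start)."""
    left_w = words[m] if m >= 0 else "<s>"
    left_t = tags[m] if m >= 0 else "<s>"
    feats = [f"tagpair:{left_t}_{tags[n]}", f"wordpair:{left_w}_{words[n]}"]
    return sum(weights.get(f, 0.0) for f in feats)

def permutation_cost(words: List[str], tags: List[str],
                     perm: List[int], weights: Dict[str, float]) -> float:
    """C(pi|w) = sum_i c(pi_{i-1}, pi_i), with the predecessor of the first word taken as <s>."""
    cost, prev = 0.0, -1
    for idx in perm:
        cost += pairwise_cost(words, tags, prev, idx, weights)
        prev = idx
    return cost

def greedy_reorder(words: List[str], tags: List[str],
                   weights: Dict[str, float]) -> List[int]:
    """Naive left-to-right construction: repeatedly append the word that is cheapest to
    place next. Shown only for illustration; the paper's system converts the search to a
    symmetric TSP and solves it with chained Lin-Kernighan."""
    remaining = set(range(len(words)))
    perm: List[int] = []
    prev = -1
    while remaining:
        nxt = min(remaining, key=lambda j: pairwise_cost(words, tags, prev, j, weights))
        perm.append(nxt)
        remaining.remove(nxt)
        prev = nxt
    return perm
```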
Past work (Visweswariah et al., 2011) used high quality manual word alignments to derive the desired reorderings π* as follows. Given word aligned source and target sentences, we drop the source words that are not aligned.[1] Let m_i be the mean of the target word positions that the source word at index i is aligned to. We then sort the source indices in increasing order of m_i (this order defines π*). If m_i = m_j (for example, because w_i and w_j are aligned to the same set of words) we keep them in the same order that they occurred in the source sentence.

We used the single best Margin Infused Relaxed Algorithm (MIRA) (McDonald et al., 2005b; Crammer and Singer, 2003) with online updates to our parameters given by:

\theta_{i+1} = \arg\min_{\theta} \|\theta - \theta_i\| \quad \text{s.t.} \quad C(\pi^* \mid w) < C(\hat{\pi} \mid w) - L(\pi^*, \hat{\pi}).

In the equation above, \hat{\pi} = \arg\min_{\pi} C(\pi \mid w) is the best reordering based on the current parameter value θ_i, and L is a loss function. We take L to be the number of words for which the hypothesized permutation \hat{\pi} has a different preceding word as compared with the reference permutation π*.

In this paper we focus on the case where, in addition to using a relatively small number of manually word aligned sentences to derive the reference permutations π* used to train our model, we would like to use more abundant but noisier machine aligned sentence pairs. To handle the larger amount of training data we obtain from machine alignments, we make an approximation in training that we found empirically to not affect performance but that makes training faster by more than a factor of five. This allows us to train the reordering model with roughly 150K sentences in about two hours. The approximation we make is that instead of using the chained Lin-Kernighan heuristic to solve the TSP problem to find \hat{\pi} = \arg\min_{\pi} C(\pi \mid w), we select greedily for each word the preceding word that has the lowest cost.[2] Using ψ_i to denote \arg\min_{j} c(j, i) and letting

C(\psi \mid w) = \sum_{i} c(\psi_i, i),

we do the update according to:

\theta_{i+1} = \arg\min_{\theta} \|\theta - \theta_i\| \quad \text{s.t.} \quad C(\pi^* \mid w) < C(\psi \mid w) - L(\pi^*, \psi).

Again the loss L(π*, ψ) is the number of positions i for which π*_{i-1} is different from ψ_{i-1}.

[1] Note that the unaligned source words are dropped only at the time of training. At the time of testing all source words are retained as the alignment information is obviously not available at test time.
[2] It should be noted that this approximation was done only at the time of training. At the time of testing we still use the chained Lin-Kernighan heuristic to solve the TSP problem.

4 Generating reference reordering from parallel sentences

The main aim of our work is to improve the reordering model by using parallel sentences for which manual word alignments are not available. In other words, we want to generate relatively clean reference reorderings from parallel sentences and use them for training a reordering model. A straightforward approach for this is to use a supervised aligner to align the words in the sentences and then derive the reference reordering as we do for manual word alignments. However, as we will see in the experimental results, the quality of a reordering model trained from automatic alignments is very sensitive to the quality of alignments. This motivated us to explore if we can further improve our aligner and the method for generating reference reorderings given alignments. We improve upon the above mentioned basic approach by coupling the tasks of reordering and word alignment. We do this by building a reordering model C(πs|ws, wt, a) that scores reorderings πs given the source sentence ws, target sentence wt and machine alignments a. Complementing this model, we build an alignment model P(a|ws, wt, πs, πt) that scores alignments a given the source and target sentences and their predicted reorderings according to source and target reordering models. The model C(πs|ws, wt, a) helps to produce better reference reorderings for training our final reordering model given fixed machine alignments, and the alignment model P(a|ws, wt, πs, πt) helps improve the machine alignments taking into account information from the reordering models. In the following sections, we describe our overall approach followed by a description of the two models.
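The derivation of reference permutations from word alignments (Section 3) is simple enough to state as code. The sketch below is an illustration under the stated conventions (0-indexed positions, alignments given as source-target index pairs) and is not the authors' implementation; the example alignment at the end is a plausible hand-made alignment for the Figure 1 sentence, not taken from the paper.

```python
from collections import defaultdict
from typing import Iterable, List, Tuple

def reference_permutation(alignment: Iterable[Tuple[int, int]]) -> List[int]:
    """Derive pi* from a word alignment: sort the aligned source indices by the mean
    of the target positions they link to, keeping ties in source order. Unaligned
    source words are dropped, as is done at training time."""
    links = defaultdict(list)
    for src, tgt in alignment:
        links[src].append(tgt)
    aligned = sorted(links)                                # aligned source indices, in source order
    mean_pos = {i: sum(links[i]) / len(links[i]) for i in aligned}
    return sorted(aligned, key=lambda i: mean_pos[i])      # stable sort keeps ties in source order

# Hypothetical alignment for "He went to the stadium to play" ->
# "vaha khelne keliye stadium ko gaya", with "the" left unaligned:
example = [(0, 0), (1, 5), (2, 4), (4, 3), (5, 2), (6, 1)]
print(reference_permutation(example))   # [0, 6, 5, 4, 2, 1]
```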
4.1 Overall approach to generating training data

We first describe our overall approach to generating training data for the reordering model given a small corpus of sentences with manual word alignments (H) and a much larger corpus of parallel sentences (U) that are not word aligned.

[Figure 2: Overall approach: building a sequence of reordering and alignment models. Step 1 trains reordering models C(πs|ws) and C(πt|wt) from manual word alignments; Step 2 feeds the predictions of the reordering models to the alignment model P(a|ws, wt, πs, πt); Step 3 feeds the predictions of the alignment model to the reordering models C(πs|ws, a) and C(πt|wt, a).]

The basic idea is to chain together the two models, viz., the reordering model and the alignment model, as illustrated in Figure 2. The steps involved are as described below:

Step 1: First, we use manual word alignments (H) to train source and target reordering models as described in (Visweswariah et al., 2011).
Step 2: Next, we use the hand alignments to train an alignment model P(a|ws, wt, πs, πt). In addition to the original source and target sentence, we also feed the predictions of the reordering models trained in Step 1 to this alignment model (see Section 4.2 for details of the model itself).
Step 3: Finally, we use the predictions of the alignment model trained in Step 2 to train reordering models C(πs|ws, wt, a) (see Section 4.3 for details on the reordering model itself).

After building the sequence of models shown in Figure 2, we apply them in sequence on the unaligned parallel data U, starting with the reordering models C(πs|ws) and C(πt|wt). The reorderings obtained for the source side in U (after applying the final model C(πs|ws, a)) are used along with reference reorderings obtained from the manual word alignments to train our reordering model. Note that, in theory, we could iterate over Steps 2 and 3 several times, but in practice we did not see a benefit of going beyond one iteration in our experiments. Also, since we are interested only in the source side reorderings produced by the model C(πs|ws, a), the target reordering model C(πt|wt, a) is needed only if we iterate over Steps 2 and 3.

We now point to some practical considerations of our approach. Consider the case when we are training an alignment model conditioned on reorderings, P(a|ws, wt, πs, πt). If the reordering model that generated these reorderings πs, πt were trained on the same data that we are using to train the alignment model, then the reorderings would be much better than we would expect on unseen test data, and hence the alignment model P(a|ws, wt, πs, πt) may learn to make the alignment overly consistent with the reorderings πs and πt. To counter this problem, we divide the training data H into K parts and at each stage we apply a model (reordering or alignment) on part i that had not seen part i in training. This ensures that the alignment model does not see very optimistic reorderings and vice versa.
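The jackknifing just described is mechanical but easy to get wrong, so the following sketch shows one way to annotate each fold of H with a model trained on the remaining folds. The train/apply interfaces and function names are hypothetical placeholders, not the authors' code.

```python
from typing import Callable, List, Sequence, TypeVar

T = TypeVar("T")   # a training example, e.g. a hand-aligned sentence pair
M = TypeVar("M")   # a trained model (reordering or alignment)

def jackknife_apply(data: Sequence[T],
                    k: int,
                    train: Callable[[List[T]], M],
                    apply: Callable[[M, T], T]) -> List[T]:
    """Annotate each example in `data` (e.g. with predicted reorderings or alignments)
    using a model trained on the other K-1 folds, so that no example is annotated by
    a model that saw it in training."""
    folds = [list(data[i::k]) for i in range(k)]
    out: List[T] = []
    for i, fold in enumerate(folds):
        train_part = [x for j, f in enumerate(folds) if j != i for x in f]
        model = train(train_part)
        out.extend(apply(model, x) for x in fold)
    return out
```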
We now describe the individual models, viz., P(a|ws, wt, πs, πt) and C(πs|ws, a).

4.2 Modeling alignments given reordering

In this section we describe how we fuse information from source and target reordering models to improve word alignments. As a base model we use the correction model for word alignments proposed by McCarley et al. (2011). This model was significantly better than the MaxEnt aligner (Ittycheriah and Roukos, 2005) and is also flexible in the sense that it allows arbitrary features to be introduced while still keeping training and decoding tractable, by using a greedy decoding algorithm that explores potential alignments in a small neighborhood of the current alignment. The model thus needs a reasonably good initial alignment to start with, for which we use the MaxEnt aligner (Ittycheriah and Roukos, 2005) as in McCarley et al. (2011). The correction model is a log-linear model:

P(a \mid ws, wt) = \frac{\exp(\lambda^{T} \phi(a, ws, wt))}{Z(ws, wt)}.

The λs are trained using the L-BFGS algorithm (Liu et al., 1989) to maximize the log-likelihood smoothed with L2 regularization. The feature functions φ we start with are those used in McCarley et al. (2011) and include features encoding the Model 1 probabilities between pairs of words linked in the alignment a, features that inspect source and target POS tags and parses (if available), and features that inspect the alignments of adjacent words in the source and target sentence. To incorporate information from the reordering model, we add features that use the predicted source permutation πs and target permutation πt. We introduce some notation to describe these features. Let S_m and S_n be the sets of indices of target words that ws_m and ws_n are aligned to, respectively. We define the minimum signed distance (msd) between these two sets as:

msd(S_m, S_n) = i^* - j^*, \quad \text{where } (i^*, j^*) = \arg\min_{(i,j) \in S_m \times S_n} |i - j|.

We quantize and encode with binary features the minimum signed distance between the sets of indices of the target words that source words adjacent in the reordering πs (ws_{πs_i} and ws_{πs_{i+1}}) are aligned to. We instantiate similar features with the roles of source and target sentences reversed. With this addition of features we use the same training and testing procedure as in McCarley et al. (2011). If the reorderings πs were perfect we would learn to only allow alignments where ws_{πs_i} and ws_{πs_{i+1}} were aligned to adjacent words in the target sentence. Although the reordering model is not perfect, preferring alignments consistent with the reordering models improves the aligner.

4.3 Modeling reordering given alignments

To model source permutations given source (ws) and target (wt) sentences and alignments (a), we reuse the reordering model framework described in Section 3, adding additional features capturing the relation between a hypothesized permutation π and alignments a. To allow for searching via the same TSP formulation we once again assign costs to candidate permutations as:

C(\pi_s \mid w_s, w_t, a) = \sum_{i} c(\pi_{i-1}, \pi_i \mid w_s, a).

Note that we introduce a dependence on the target sentence wt only through the alignment a. Once again we parameterize the costs by a linear model:

c(m, n) = \theta^{T} \Phi(w_s, a, m, n).

For the feature functions Φ, in addition to the features that only depend on ws, m, n (which we use in our standard reordering model), we add binary indicator features based on msd(S_m, S_n) and on msd(S_m, S_n) conjoined with POS(ws_m) and POS(ws_n).
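As a concrete reference for this feature, here is a minimal sketch of the minimum signed distance defined in Section 4.2. It is an illustration only; the quantization into binary indicator features and the conjunction with POS tags are left out.

```python
from typing import Set

def minimum_signed_distance(s_m: Set[int], s_n: Set[int]) -> int:
    """msd(S_m, S_n) = i* - j*, where (i*, j*) minimizes |i - j| over S_m x S_n.
    S_m and S_n are the sets of target-word indices aligned to two source words."""
    if not s_m or not s_n:
        raise ValueError("msd is only defined for non-empty (aligned) index sets")
    i_star, j_star = min(((i, j) for i in s_m for j in s_n),
                         key=lambda pair: abs(pair[0] - pair[1]))
    return i_star - j_star

# For source words aligned to target positions {4} and {2, 3}, the closest pair is
# (4, 3), so msd is 4 - 3 = 1.
assert minimum_signed_distance({4}, {2, 3}) == 1
```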
Here, Sm and Sn are the set of indices of target words that ws m and ws n are aligned to respectively. We conjoin the msd (minimum signed distance) with the POS tags to allow the model to capture the fact that the alignment error rate maybe higher for some POS tags than others (e.g., we have observed verbs have a higher error rate in Urdu-English alignments). Given these features we train the parameters θ using the MIRA algorithm as described in Section 3. Using this model, we can find the lowest cost permutation C(πs|ws, a) using the LinKernighan heuristic as described in Section 3. This model allows us to combine features from the original reordering model along with information coming from the alignments to find source reorderings given a parallel corpus and alignments. We will see in the experimental section that this improves upon the simple heuristic for deriving reorderings described in Section 3. 5 Experimental setup In this section we describe the experimental setup that we used to evaluate the models proposed in this paper. All experiments were done on UrduEnglish and we evaluate reordering in two ways: Firstly, we evaluate reordering performance directly by comparing the reordered source sentence in Urdu with a reference reordering obtained from the manual word alignments using BLEU (Papineni et al., 2002) (we call this measure monolingual BLEU or mBLEU). All mBLEU results are reported on a small test set of about 400 sentences set aside from our set of sentences with manual word alignments. Additionally, we evaluate the effect of reordering on our final systems for machine translation measured using BLEU. We use about 10K sentences (180K words) of manual word alignments which were created in house using part of the NIST MT-08 training data3 to train our baseline reordering model and to train our supervised machine aligners. We use a parallel corpus of 3.9M words consisting of 1.7M words from the NIST MT-08 training data set and 2.2M words extracted from parallel news stories on the 3http://www.ldc.upenn.edu web4. The parallel corpus is used for building our phrased based machine translation system and to add training data for our reordering model. For our English language model, we use the Gigaword English corpus in addition to the English side of our parallel corpus. Our Part-of-Speech tagger is a Maximum Entropy Markov model tagger trained on roughly fifty thousand words from the CRULP corpus (Hussain, 2008). For our machine translation experiments, we used a standard phrase based system (Al-Onaizan and Papineni, 2006) with a lexicalized distortion model with a window size of +/-4 words5. To extract phrases we use HMM alignments along with higher quality alignments from a supervised aligner (McCarley et al., 2011). We report results on the (four reference) NIST MT-08 evaluation set in Table 4 for the News and Web conditions. The News and Web conditions each contain roughly 20K words in the test set, with the Web condition containing more informal text from the web. 6 Results and Discussions We now discuss the results of our experiments. Need for additional data: We first show the need for additional data in Urdu-English reordering. Column 2 of Table 1 shows mBLEU as a function of the number of sentences with manual word alignments that are used to train the reordering model. We see a roughly 3 mBLEU points drop in performance per halving of data indicating a potential for improvement by adding more data. 
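The mBLEU figures above are ordinary corpus BLEU computed between the hypothesized reorderings of the source sentences and the reference reorderings derived from the manual word alignments. A small sketch of that evaluation follows, assuming both sides are available as whitespace-tokenized strings and that the sacrebleu package is an acceptable stand-in for the BLEU implementation used in the paper.

```python
import sacrebleu

def mbleu(hyp_reorderings, ref_reorderings):
    """Monolingual BLEU: compare reordered source sentences against reference
    reorderings of the same sentences (two parallel lists of tokenized strings)."""
    return sacrebleu.corpus_bleu(hyp_reorderings, [ref_reorderings]).score

# Hypothetical usage with a single (already correctly reordered) sentence:
# mbleu(["vaha khelne keliye stadium ko gaya"],
#       ["vaha khelne keliye stadium ko gaya"])   # -> 100.0
```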
Using fewer features: We compare the performance of a model trained using lexical features for all words (Column 2 of Table 1) with a model trained using lexical features only for the 1000 most frequent words (Column 3 of Table 1). The motivation for this is to explore if a good model can be learned even from a small amount of data if we restrict the number of features in a reasonable manner. However, we see that even with only 2.4K sentences with manual word alignments our model benefits from lexical identities of more than the 1000 most frequent words. Effect of quality of machine alignments: We next look at the use of automatically generated 4http://centralasiaonline.com 5Note that the same window size of +/-4 words was used for all the systems, i.e., the baseline system as well as the systems using different preordering techniques. 1280 Data size All features Frequent lex only 10K 52.5 50.8 5K 49.6 49.0 2.5K 46.6 46.2 Table 1: mBLEU scores for Urdu to English reordering using different number of sentences of manually word aligned training data with all features and with lexical features instantiated only for the 1000 most frequent words. machine alignments to train the reordering model and see the effect of aligner quality on the reordering model generated using this data. These experiments also form the baseline for the models we propose in this paper to clean up alignments. We experimented with two different supervised aligners : a maximum entropy aligner (Ittycheriah and Roukos, 2005) and an improved correction model that corrects the maximum entropy alignments (McCarley et al., 2011). Aligner Train size mBLEU Type f-Measure (words) None 35.5 Manual 180K 52.5 MaxEnt 70.0 3.9M 49.5 Correction model 78.1 3.9M 55.1 Table 2: mBLEU scores for Urdu to English reordering using models trained on different data sources and tested on a development set of 8017 Urdu tokens. Table 2 shows mBLEU scores when the reordering model is trained on reordering references created from aligners with different quality. We see that the quality of the alignments matter a great deal to the reordering model; using MaxEnt alignments cause a degradation in performance over just using a small set of manual word alignments. The alignments obtained using the aligner of McCarley et al. (2011) are of much better quality and hence give higher reordering performance. Note that this reordering performance is much better than that obtained using manual word alignments because the size of machine alignments is much larger (3.9M v/s 180K words). Improvements in reordering performance using the proposed models: Table 3 shows improvements in the reordering model when using the models proposed in this paper. We use H to refer to the manually word aligned data and U to refer to the additional sentence pairs for which manual word alignments are not available. We report the following numbers : 1. Base correction model: This is the baseline where we use the correction model of McCarley et al. (2011) for generating word alignments. The f-Measure of this aligner is 78.1% (see row 1, column 2). Corresponding to this, we also report the baseline for our reordering experiments in the third column. Here, we first generate word alignments for U using the aligner of McCarley et al. (2011) and then extract reference reorderings from these alignments. We then combine these reference reorderings with the reference reorderings derived from H and use this combined data to train a reordering model which serves as the baseline (mBLEU = 55.1). 2. 
Correction model, C(π|a): Here, once again we generate alignments for U using the correction model of McCarley et al. (2011). However, instead of using the basic approach of extracting reference reorderings, we use our improved model C(π|a) to generate reference reorderings from U. These reference reorderings are again combined with the reference reorderings derived from H and used to train a reordering model (mBLEU = 56.4). 3. P(a|π), C(π|a): Here, we build the entire sequence of models shown in Figure 2. The alignment model P(a|π) is first improved by using predictions from the reordering model. These improved alignments are then used to extract better reference reorderings from U using C(π|a). We see substantial improvements over simply adding in the data from the machine alignments. Improvements come roughly in equal parts from the two techniques we proposed in this paper : (i) using a model to generate reference reorderings from noisy alignments and (ii) using reordering information to improve the aligner. Method f-Measure mBLEU Base Correction model 78.1 55.1 Correction model, C(π|a) 78.1 56.4 P(a|π), C(π|a) 80.7 57.6 Table 3: mBLEU with different methods to generate reordering model training data from a machine aligned parallel corpus in addition to manual word alignments. Improvements in MT performance using the proposed models: We report results for a phrase based system with different preordering techniques. For results including a reordering model, we simply reorder the source side Urdu data both while training and at test time. In addition to 1281 phrase based systems with different preordering methods, we also report on a hierarchical phrase based system for which we used Joshua 4.0 (Ganitkevitch et al., 2012). We see a significant gain of 1.8 BLEU points in machine translation by going beyond manual word alignments using the best reordering model reported in Table 3. We also note a gain of 2.0 BLEU points over a hierarchical phrase based system. System type MT-08 eval Web News All Baseline (no preordering) 18.4 25.6 22.2 Hierarchical phrase based 19.6 30.7 25.4 Reordering: Manual alignments 20.7 30.0 25.6 + Machine alignments simple 21.3 30.9 26.4 + machine alignments, model based 22.1 32.2 27.4 Table 4: MT performance without preordering (phrase based and hierarchical phrase based), and with reordering models using different data sources (phrase based). 7 Related work Dealing with the problem of handling word order differences in machine translation has recently received much attention. The approaches proposed for solving this problem can be broadly divided into 3 sets as discussed below. The first set of approaches handle the reordering problem as part of the decoding process. Hierarchical models (Chiang, 2007) and syntax based models (Yamada and Knight, 2002; Galley et al., 2006; Liu et al., 2006; Zollmann and Venugopal, 2006) improve upon the simpler phrase based models but with significant additional computational cost (compared with phrase based systems) due to the inclusion of chart based parsing in the decoding process. Syntax based models also require a high quality source or target language parser. The second set of approaches rely on a source language parser and treat reordering as a separate process that is applied on the source language sentence at training and test time before using a standard approach to machine translation. 
Preordering the source data with hand written or automatically learned rules is effective and efficient (Collins et al., 2005; Wang et al., 2007; Ramanathan et al., 2009; Xia and McCord, 2004; Genzel, 2010; Visweswariah et al., 2010) but requires a source language parser. Recent approaches that avoid the need for a source or target language parser and retain the efficiency of preordering models were proposed in (Tromble and Eisner, 2009; DeNero and Uszkoreit, 2011; Visweswariah et al., 2011; Neubig et al., 2012). (DeNero and Uszkoreit, 2011; Visweswariah et al., 2011; Neubig et al., 2012) focus on the use of manual word alignments to learn preordering models and in both cases no benefit was obtained by using the parallel corpus in addition to manual word alignments. Our work is an extension of Visweswariah et al. (2011) and we focus on being able to incorporate relatively noisy machine alignments to improve the reordering model. In addition to being related to work in reordering, our work is also more broadly related to several other efforts which we now outline. Setiawan et al. (2010) proposed the use of function word reordering to improve alignments. While this work is similar to one of our models (model of alignments given reordering) we differ in using a reordering model of all words (not just function words) and both source and target sentences (not just the source sentence). The task of directly learning a reordering model for language pairs that are very different is closely related to the task of parsing and hence work on semi-supervised parsing (Koo et al., 2008; McClosky et al., 2006; Suzuki et al., 2009) is broadly related to our work. Our work coupling reordering and alignments is also similar in spirit to approaches where parsing and alignment are coupled (Wu, 1997). 8 Conclusion In the paper we showed that a reordering model can benefit from data beyond a relatively small corpus of manual word alignments. We proposed a model that scores reorderings given alignments and the source sentence that we use to generate cleaner training data from noisy alignments. We also proposed a model that scores alignments given source and target sentence reorderings that improves a supervised alignment model by 2.6 points in f-Measure. While the improvement in alignment performance is modest, the improvement does result in improved reordering models. Cumulatively, we see a gain of 1.8 BLEU points over a baseline reordering model that only uses manual word alignments, a gain of 2.0 BLEU points over a hierarchical phrase based system, and a gain of 5.2 BLEU points over a phrase based 1282 system that uses no source preordering on a publicly available Urdu-English test set. As future work we would like to evaluate our models on other language pairs. Another avenue of future work we would like to explore is the use of monolingual source and target data to further assist the reordering model. We hope to be able to learn lexical information such as how many arguments a verb takes, what nouns are potential subjects for a given verb by gathering statistics from an English parser and projecting to the source language via our word/phrase translation table. References Yaser Al-Onaizan and Kishore Papineni. 2006. Distortion models for statistical machine translation. In Proceedings of ACL, ACL-44, pages 529–536, Morristown, NJ, USA. Association for Computational Linguistics. David Applegate, William Cook, and Andre Rohe. 2003. Chained lin-kernighan for large traveling salesman problems. 
In INFORMS Journal On Computing. David Chiang. 2007. Hierarchical phrase-based translation. Comput. Linguist., 33(2):201–228, June. Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL, pages 531–540, Morristown, NJ, USA. Association for Computational Linguistics. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. J. Mach. Learn. Res., 3:951–991, March. John DeNero and Jakob Uszkoreit. 2011. Inducing sentence structure from parallel corpora for reordering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 193–203, Stroudsburg, PA, USA. Association for Computational Linguistics. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 961–968, Stroudsburg, PA, USA. Association for Computational Linguistics. Juri Ganitkevitch, Yuan Cao, Jonathan Weese, Matt Post, and Chris Callison-Burch. 2012. Joshua 4.0: Packing, pro, and paraphrases. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 283–291, Montr´eal, Canada, June. Association for Computational Linguistics. Dmitriy Genzel. 2010. Automatically learning sourceside reordering rules for large scale machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics. Sarmad Hussain. 2008. Resources for Urdu language processing. In Proceedings of the 6th Workshop on Asian Language Resources, IJCNLP’08. Abraham Ittycheriah and Salim Roukos. 2005. A maximum entropy word aligner for Arabic-English machine translation. In Proceedings of HLT/EMNLP, HLT ’05, pages 89–96, Stroudsburg, PA, USA. Association for Computational Linguistics. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In ACL, pages 595–603. Dong C. Liu, Jorge Nocedal, and Dong C. 1989. On the limited memory bfgs method for large scale optimization. Mathematical Programming, 45:503– 528. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 609–616, Stroudsburg, PA, USA. Association for Computational Linguistics. J. Scott McCarley, Abraham Ittycheriah, Salim Roukos, Bing Xiang, and Jian-ming Xu. 2011. A correction model for word alignments. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 889– 898, Stroudsburg, PA, USA. Association for Computational Linguistics. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In HLT-NAACL. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 91–98, Stroudsburg, PA, USA. Association for Computational Linguistics. 
Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT. Graham Neubig, Taro Watanabe, and Shinsuke Mori. 2012. Inducing a discriminative parser to optimize machine translation reordering. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational 1283 Natural Language Learning, pages 843–853, Jeju Island, Korea, July. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Ananthakrishnan Ramanathan, Hansraj Choudhary, Avishek Ghosh, and Pushpak Bhattacharyya. 2009. Case markers and morphology: addressing the crux of the fluency problem in English-Hindi smt. In Proceedings of ACL-IJCNLP. Hendra Setiawan, Chris Dyer, and Philip Resnik. 2010. Discriminative word alignment with a function word reordering model. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 534–544, Stroudsburg, PA, USA. Association for Computational Linguistics. Jun Suzuki, Hideki Isozaki, Xavier Carreras, and Michael Collins. 2009. An empirical study of semisupervised structured conditional models for dependency parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2, EMNLP ’09, pages 551–560, Stroudsburg, PA, USA. Association for Computational Linguistics. Christoph Tillman. 2004. A unigram orientation model for statistical machine translation. In Proceedings of HLT-NAACL. Roy Tromble and Jason Eisner. 2009. Learning linear ordering problems for better translation. In Proceedings of EMNLP. Karthik Visweswariah, Jiri Navratil, Jeffrey Sorensen, Vijil Chenthamarakshan, and Nandakishore Kambhatla. 2010. Syntax based reordering with automatically derived rules for improved statistical machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics. Karthik Visweswariah, Rajakrishnan Rajkumar, Ankur Gandhe, Ananthakrishnan Ramanathan, and Jiri Navratil. 2011. A word reordering model for improved machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 486–496, Stroudsburg, PA, USA. Association for Computational Linguistics. Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proceedings of EMNLPCoNLL. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Comput. Linguist., 23(3):377–403, September. Fei Xia and Michael McCord. 2004. Improving a statistical MT system with automatically learned rewrite patterns. In COLING. Kenji Yamada and Kevin Knight. 2002. A decoder for syntax-based statistical MT. In Proceedings of ACL. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings on the Workshop on Statistical Machine Translation. 1284
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1285–1293, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Vector Space Model for Adaptation in Statistical Machine Translation Boxing Chen, Roland Kuhn and George Foster National Research Council Canada fi[email protected] Abstract This paper proposes a new approach to domain adaptation in statistical machine translation (SMT) based on a vector space model (VSM). The general idea is first to create a vector profile for the in-domain development (“dev”) set. This profile might, for instance, be a vector with a dimensionality equal to the number of training subcorpora; each entry in the vector reflects the contribution of a particular subcorpus to all the phrase pairs that can be extracted from the dev set. Then, for each phrase pair extracted from the training data, we create a vector with features defined in the same way, and calculate its similarity score with the vector representing the dev set. Thus, we obtain a decoding feature whose value represents the phrase pair’s closeness to the dev. This is a simple, computationally cheap form of instance weighting for phrase pairs. Experiments on large scale NIST evaluation data show improvements over strong baselines: +1.8 BLEU on Arabic to English and +1.4 BLEU on Chinese to English over a non-adapted baseline, and significant improvements in most circumstances over baselines with linear mixture model adaptation. An informal analysis suggests that VSM adaptation may help in making a good choice among words with the same meaning, on the basis of style and genre. 1 Introduction The translation models of a statistical machine translation (SMT) system are trained on parallel data. Usage of language and therefore the best translation practice differs widely across genres, topics, and dialects, and even depends on a particular author’s or publication’s style; the word “domain” is often used to indicate a particular combination of all these factors. Unless there is a perfect match between the training data domain and the (test) domain in which the SMT system will be used, one can often get better performance by adapting the system to the test domain. Domain adaptation is an active topic in the natural language processing (NLP) research community. Its application to SMT systems has recently received considerable attention. Approaches that have been tried for SMT model adaptation include mixture models, transductive learning, data selection, instance weighting, and phrase sense disambiguation, etc. Research on mixture models has considered both linear and log-linear mixtures. Both were studied in (Foster and Kuhn, 2007), which concluded that the best approach was to combine submodels of the same type (for instance, several different TMs or several different LMs) linearly, while combining models of different types (for instance, a mixture TM with a mixture LM) loglinearly. (Koehn and Schroeder, 2007), instead, opted for combining the sub-models directly in the SMT log-linear framework. In transductive learning, an MT system trained on general domain data is used to translate indomain monolingual data. The resulting bilingual sentence pairs are then used as additional training data (Ueffing et al., 2007; Chen et al., 2008; Schwenk, 2008; Bertoldi and Federico, 2009). 
Data selection approaches (Zhao et al., 2004; Hildebrand et al., 2005; L¨u et al., 2007; Moore and Lewis, 2010; Axelrod et al., 2011) search for bilingual sentence pairs that are similar to the indomain “dev” data, then add them to the training data. Instance weighting approaches (Matsoukas et al., 2009; Foster et al., 2010; Huang and Xiang, 2010; Phillips and Brown, 2011; Sennrich, 2012) 1285 typically use a rich feature set to decide on weights for the training data, at the sentence or phrase pair level. For example, a sentence from a subcorpus whose domain is far from that of the dev set would typically receive a low weight, but sentences in this subcorpus that appear to be of a general nature might receive higher weights. The 2012 JHU workshop on Domain Adaptation for MT 1 proposed phrase sense disambiguation (PSD) for translation model adaptation. In this approach, the context of a phrase helps the system to find the appropriate translation. In this paper, we propose a new instance weighting approach to domain adaptation based on a vector space model (VSM). As in (Foster et al., 2010), this approach works at the level of phrase pairs. However, the VSM approach is simpler and more straightforward. Instead of using word-based features and a computationally expensive training procedure, we capture the distributional properties of each phrase pair directly, representing it as a vector in a space which also contains a representation of the dev set. The similarity between a given phrase pair’s vector and the dev set vector becomes a feature for the decoder. It rewards phrase pairs that are in some sense closer to those found in the dev set, and punishes the rest. In initial experiments, we tried three different similarity functions: Bhattacharyya coefficient, Jensen-Shannon divergency, and cosine measure. They all enabled VSM adaptation to beat the non-adaptive baseline, but Bhattacharyya similarity worked best, so we adopted it for the remaining experiments. The vector space used by VSM adaptation can be defined in various ways. In the experiments described below, we chose a definition that measures the contribution (to counts of a given phrase pair, or to counts of all phrase pairs in the dev set) of each training subcorpus. Thus, the variant of VSM adaptation tested here bears a superficial resemblance to domain adaptation based on mixture models for TMs, as in (Foster and Kuhn, 2007), in that both approaches rely on information about the subcorpora from which the data originate. However, a key difference is that in this paper we explicitly capture each phrase pair’s distribution across subcorpora, and compare it to the aggregated distribution of phrase pairs in the dev set. In mixture models, a phrase pair’s distribu1http://www.clsp.jhu.edu/workshops/archive/ws12/groups/dasmt tion across subcorpora is captured only implicitly, by probabilities that reflect the prevalence of the pair within each subcorpus. Thus, VSM adaptation occurs at a much finer granularity than mixture model adaptation. More fundamentally, there is nothing about the VSM idea that obliges us to define the vector space in terms of subcorpora. For instance, we could cluster the words in the source language into S clusters, and the words in the target language into T clusters. 
Then, treating the dev set and each phrase pair as a pair of bags of words (a source bag and a target bag) one could represent each as a vector of dimension S + T, with entries calculated from the counts associated with the S + T clusters (in a way similar to that described for phrase pairs below). The (dev, phrase pair) similarity would then be independent of the subcorpora. One can think of several other ways of defining the vector space that might yield even better results than those reported here. Thus, VSM adaptation is not limited to the variant of it that we tested in our experiments. 2 Vector space model adaptation Vector space models (VSMs) have been widely applied in many information retrieval and natural language processing applications. For instance, to compute the sense similarity between terms, many researchers extract features for each term from its context in a corpus, define a VSM and then apply similarity functions (Hindle, 1990; Lund and Burgess, 1996; Lin, 1998; Turney, 2001). In our experiments, we exploited the fact that the training data come from a set of subcorpora. For instance, the Chinese-English training data are made up of 14 subcorpora (see section 3 below). Suppose we have C subcorpora. The domain vector for a phrase-pair (f, e) is defined as V (f, e) =< w1(f, e), ...wi(f, e), ..., wC(f, e) >, (1) where wi(f, e) is a standard tf · idf weight, i.e. wi(f, e) = tfi (f, e) · idf (f, e) . (2) To avoid a bias towards longer corpora, we normalize the raw joint count ci(f, e) in the corpus si by dividing by the maximum raw count of any phrase pair extracted in the corpus si. Let 1286 tfi (f, e) = ci (f, e) max {ci (fj, ek) , (fj, ek) ∈si}. (3) The idf (f, e) is the inverse document frequency: a measure of whether the phrase-pair (f, e) is common or rare across all subcorpora. We use the standard formula: idf (f, e) = log  C df (f, e) + λ  , (4) where df(f, e) is the number of subcorpora that (f, e) appears in, and λ is an empirically determined smoothing term. For the in-domain dev set, we first run word alignment and phrases extracting in the usual way for the dev set, then sum the distribution of each phrase pair (fj, ek) extracted from the dev data across subcorpora to represent its domain information. The dev vector is thus V (dev) =< w1(dev), . . . , wC(dev) >, (5) where wi(dev) = j=J X j=0 k=K X k=0 cdev (fj, ek) wi(fj, ek) (6) J, K are the total numbers of source/target phrases extracted from the dev data respectively. cdev (fj, ek) is the joint count of phrase pair fj, ek found in the dev set. The vector can also be built with other features of the phrase pair. For instance, we could replace the raw joint count ci(f, e) in Equation 3 with the raw marginal count of phrase pairs (f, e). Therefore, even within the variant of VSM adaptation we focus on in this paper, where the definition of the vector space is based on the existence of subcorpora, one could utilize other definitions of the vectors of the similarity function than those we utilized in our experiments. 2.1 Vector similarity functions VSM uses the similarity score between the vector representing the in-domain dev set and the vector representing each phrase pair as a decoder feature. There are many similarity functions we could have employed for this purpose (Cha, 2007). We tested three commonly-used functions: the Bhattacharyya coefficient (BC) (Bhattacharyya, 1943; Kazama et al., 2010), the Jensen-Shannon divergence (JSD), and the cosine measure. 
According to (Cha, 2007), these belong to three different families of similarity functions: the Fidelity family, the Shannon’s entropy family, and the inner Product family respectively. It was BC similarity that yielded the best performance, and that we ended up using in subsequent experiments. To map the BC score onto a range from 0 to 1, we first normalize each weight in the vector by dividing it by the sum of the weights. Thus, we get the probability distribution of a phrase pair or the phrase pairs in the dev data across all subcorpora: pi(f, e) = wi(f, e) Pj=C j=1 wj(f, e) (7) pi(dev) = wi(dev) Pj=C j=1 wj(dev) (8) To further improve the similarity score, we apply absolute discounting smoothing when calculating the probability distributions pi(f, e). We subtract a discounting value α from the non-zero pi(f, e), and equally allocate the remaining probability mass to the zero probabilities. We carry out the same smoothing for the probability distributions pi(dev). The smoothing constant α is determined empirically on held-out data. The Bhattacharyya coefficient (BC) is defined as follows: BC(dev; f, e) = i=C X i=0 p pi(dev) · pi(f, e) (9) The other two similarity functions we also tested are JSD and cosine (Cos). They are defined as follows: JSD(dev; f, e) = (10) 1 2[ i=C X i=1 pi(dev) log 2pi(dev) pi(dev) + pi(f, e) + i=C X i=1 pi(f, e) log 2pi(f, e) pi(dev) + pi(f, e)] Cos(dev; f, e) = P i pi(dev) · pi (f, e) qP i p2 i (dev) qP i p2 i (f, e) (11) 1287 corpus # segs # en tok % genres fbis 250K 10.5M 3.7 nw financial 90K 2.5M 0.9 fin gale bc 79K 1.3M 0.5 bc gale bn 75K 1.8M 0.6 bn ng gale nw 25K 696K 0.2 nw gale wl 24K 596K 0.2 wl hkh 1.3M 39.5M 14.0 hans hkl 400K 9.3M 3.3 legal hkn 702K 16.6M 5.9 nw isi 558K 18.0M 6.4 nw lex&ne 1.3M 2.0M 0.7 lex other nw 146K 5.2M 1.8 nw sinorama 282K 10.0M 3.5 nw un 5.0M 164M 58.2 un TOTAL 10.1M 283M 100.0 (all) devtest tune 1,506 161K nw wl NIST06 1,664 189K nw bng NIST08 1,357 164K nw wl Table 1: NIST Chinese-English data. In the genres column: nw=newswire, bc=broadcast conversation, bn=broadcast news, wl=weblog, ng=newsgroup, un=UN proc., bng = bn & ng. 3 Experiments 3.1 Data setting We carried out experiments in two different settings, both involving data from NIST Open MT 2012.2 The first setting is based on data from the Chinese to English constrained track, comprising about 283 million English running words. We manually grouped the training data into 14 corpora according to genre and origin. Table 1 summarizes information about the training, development and test sets; we show the sizes of the training subcorpora in number of words as a percentage of all training data. Most training subcorpora consist of parallel sentence pairs. The isi and lex&ne corpora are exceptions: the former is extracted from comparable data, while the latter is a lexicon that includes many named entities. The development set (tune) was taken from the NIST 2005 evaluation set, augmented with some web-genre material reserved from other NIST corpora. The second setting uses NIST 2012 Arabic to English data, but excludes the UN data. There are about 47.8 million English running words in these 2http://www.nist.gov/itl/iad/mig/openmt12.cfm corpus # segs # en toks % gen gale bc 57K 1.6M 3.3 bc gale bn 45K 1.2M 2.5 bn gale ng 21K 491K 1.0 ng gale nw 17K 659K 1.4 nw gale wl 24K 590K 1.2 wl isi 1,124K 34.7M 72.6 nw other nw 224K 8.7M 18.2 nw TOTAL 1,512K 47.8M 100.0 (all) devtest NIST06 1,664 202K nwl NIST08 1,360 205K nwl NIST09 1,313 187K nwl Table 2: NIST Arabic-English data. 
In the gen (genres) column: nw=newswire, bc=broadcast conversation, bn=broadcast news, ng=newsgroup, wl=weblog, nwl = nw & wl. training data. We manually grouped the training data into 7 groups according to genre and origin. Table 2 summarizes information about the training, development and test sets. Note that for this language pair, the comparable isi data represent a large proportion of the training data: 72% of the English words. We use the evaluation sets from NIST 2006, 2008, and 2009 as our development set and two test sets, respectively. 3.2 System Experiments were carried out with an in-house phrase-based system similar to Moses (Koehn et al., 2007). Each corpus was word-aligned using IBM2, HMM, and IBM4 models, and the phrase table was the union of phrase pairs extracted from these separate alignments, with a length limit of 7. The translation model (TM) was smoothed in both directions with KN smoothing (Chen et al., 2011). We use the hierarchical lexicalized reordering model (RM) (Galley and Manning, 2008), with a distortion limit of 7. Other features include lexical weighting in both directions, word count, a distance-based RM, a 4-gram LM trained on the target side of the parallel data, and a 6-gram English Gigaword LM. The system was tuned with batch lattice MIRA (Cherry and Foster, 2012). 3.3 Results For the baseline, we simply concatenate all training data. We have also compared our approach to two widely used TM domain adaptation ap1288 proaches. One is the log-linear combination of TMs trained on each subcorpus (Koehn and Schroeder, 2007), with weights of each model tuned under minimal error rate training using MIRA. The other is a linear combination of TMs trained on each subcorpus, with the weights of each model learned with an EM algorithm to maximize the likelihood of joint empirical phrase pair counts for in-domain dev data. For details, refer to (Foster and Kuhn, 2007). The value of λ and α (see Eq 4 and Section 2.1) are determined by the performance on the dev set of the Arabic-to-English system. For both Arabic-to-English and Chinese-to-English experiment, these values obtained on Arabic dev were used to obtain the results below: λ was set to 8, and α was set to 0.01. (Later, we ran an experiment on Chinese-to-English with λ and α tuned specifically for that language pair, but the performance for the Chinese-English system only improved by a tiny, insignificant amount). Our metric is case-insensitive IBM BLEU (Papineni et al., 2002), which performs matching of n-grams up to n = 4; we report BLEU scores averaged across both test sets NIST06 and NIST08 for Chinese; NIST08 and NIST09 for Arabic. Following (Koehn, 2004), we use the bootstrapresampling test to do significance testing. In tables 3 to 5, * and ** denote significant gains over the baseline at p < 0.05 and p < 0.01 levels, respectively. We first compare the performance of different similarity functions: cosine (COS), JensenShannon divergence (JSD) and Bhattacharyya coefficient (BC). The results are shown in Table 3. All three functions obtained improvements. Both COS and BC yield statistically significant improvements over the baseline, with BC performing better than COS by a further statistically significant margin. 
The Bhattacharyya coefficient is explicitly designed to measure the overlap between the probability distributions of two statistical samples or populations, which is precisely what we are trying to do here: we are trying to reward phrase pairs whose distribution is similar to that of the dev set. Thus, its superior performance in these experiments is not unexpected. In the next set of experiments, we compared VSM adaptation using the BC similarity function with the baseline which concatenates all training data and with log-linear and linear TM mixtures system Chinese Arabic baseline 31.7 46.8 COS 32.3* 47.8** JSD 32.1 47.1 BC 33.0** 48.4** Table 3: Comparison of different similarity functions. * and ** denote significant gains over the baseline at p < 0.05 and p < 0.01 levels, respectively. system Chinese Arabic baseline 31.7 46.8 loglinear tm 28.4 44.5 linear tm 32.7** 47.5** vsm, BC 33.0** 48.4** Table 4: Results for variants of adaptation. whose components are based on subcorpora. Table 4 shows that log-linear combination performs worse than the baseline: the tuning algorithm failed to optimize the log-linear combination even on dev set. For Chinese, the BLEU score of the dev set on the baseline system is 27.3, while on the log-linear combination system, it is 24.0; for Arabic, the BLEU score of the dev set on the baseline system is 46.8, while on the log-linear combination system, it is 45.4. We also tried adding the global model to the loglinear combination and it didn’t improve over the baseline for either language pair. Linear mixture was significantly better than the baseline at the p < 0.01 level for both language pairs. Since our approach, VSM, performed better than the linear mixture for both pairs, it is of course also significantly better than the baseline at the p < 0.01 level. This raises the question: is VSM performance significantly better than that of a linear mixture of TMs? The answer (not shown in the table) is that for Arabic to English, VSM performance is better than linear mixture at the p < 0.01 level. For Chinese to English, the argument for the superiority of VSM over linear mixture is less convincing: there is significance at the p < 0.05 for one of the two test sets (NIST06) but not for the other (NIST08). At any rate, these results establish that VSM adaptation is clearly superior to linear mixture TM adaptation, for one of the two language pairs. In Table 4, the VSM results are based on the 1289 system Chinese Arabic baseline 31.7 46.8 linear tm 32.7** 47.5** vsm, joint 33.0** 48.4** vsm, src-marginal 32.2* 47.3* vsm, tgt-marginal 32.6** 47.6** vsm, src+tgt (2 feat.) 32.7** 48.2** vsm, joint+src (2 feat.) 32.9** 48.4** vsm, joint+tgt (2 feat.) 32.9** 48.4** vsm, joint+src+tgt (3 feat.) 33.1** 48.6** Table 5: Results for adaptation based on joint or maginal counts. vector of the joint counts of the phrase pair. In the next experiment, we replace the joint counts with the source or target marginal counts. In Table 5, we first show the results based on source and target marginal counts, then the results of using feature sets drawn from three decoder VSM features: a joint count feature, a source marginal count feature, and a target marginal count feature. For instance, the last row shows the results when all three features are used (with their weights tuned by MIRA). It looks as though the source and target marginal counts contain useful information. The best performance is obtained by combining all three sources of information. 
The 3-feature version of VSM yields +1.8 BLEU over the baseline for Arabic to English, and +1.4 BLEU for Chinese to English. When we compared two sets of results in Table 4, the joint count version of VSM and linear mixture of TMs, we found that for Arabic to English, VSM performance is better than linear mixture at the p < 0.01 level; the Chinese to English significance test was inconclusive (VSM found to be superior to linear mixture at p < 0.05 for NIST06 but not for NIST08). We now have somewhat better results for the 3-feature version of VSM shown in Table 5. How do these new results affect the VSM vs. linear mixture comparison? Naturally, the conclusions for Arabic don’t change. For Chinese, 3-feature VSM is now superior to linear mixture at p < 0.01 on NIST06 test set, but 3-feature VSM still doesn’t have a statistically significant edge over linear mixture on NIST08 test set. A fair summary would be that 3feature VSM adaptation is decisively superior to linear mixture adaptation for Arabic to English, and highly competitive with linear mixture adaptation for Chinese to English. Our last set of experiments examined the question: when added to a system that already has some form of linear mixture model adaptation, does VSM improve performance? In (Foster and Kuhn, 2007), two kinds of linear mixture were described: linear mixture of language models (LMs), and linear mixture of translation models (TMs). Some of the results reported above involved linear TM mixtures, but none of them involved linear LM mixtures. Table 6 shows the results of different combinations of VSM and mixture models. * and ** denote significant gains over the row no vsm at p < 0.05 and p < 0.01 levels, respectively. This means that in the table, the baseline within each box containing three results is the topmost result in the box. For instance, with an initial Chinese system that employs linear mixture LM adaptation (lin-lm) and has a BLEU of 32.1, adding 1-feature VSM adaptation (+vsm, joint) improves performance to 33.1 (improvement significant at p < 0.01), while adding 3-feature VSM instead (+vsm, 3 feat.) improves performance to 33.2 (also significant at p < 0.01). For Arabic, including either form of VSM adaptation always improves performance with significance at p < 0.01, even over a system including both linear TM and linear LM adaptation. For Chinese, adding VSM still always yields an improvement, but the improvement is not significant if linear TM adaptation is already in the system. These results show that combining VSM adaptation and either or both kinds of linear mixture adaptation never hurts performance, and often improves it by a significant amount. 3.4 Informal Data Analysis To get an intuition for how VSM adaptation improves BLEU scores, we compared outputs from the baseline and VSM-adapted system (“vsm, joint” in Table 5) on the Chinese test data. We focused on examples where the two systems had translated the same source-language (Chinese) phrase s differently, and where the target-language (English) translation of s chosen by the VSMadapted system, tV , had a higher Bhattacharyya score for similarity with the dev set than did the phrase that was chosen by the baseline system, tB. Thus, we ignored differences in the two translations that might have been due to the secondary effects of VSM adaptation (such as a different tar1290 no-lin-adap lin-lm lin-tm lin-lm+lin-tm no vsm 31.7 32.1 32.7 33.1 Chinese +vsm, joint 33.0** 33.1** 33.0 33.3 +vsm, 3 feat. 
33.1** 33.2** 33.1 33.4 no vsm 46.8 47.0 47.5 47.7 Arabic +vsm, joint 48.4** 48.7** 48.6** 48.8** +vsm, 3 feat. 48.6** 48.8** 48.7** 48.9** Table 6: Results of combining VSM and linear mixture adaptation. “lin-lm” is linear language model adaptation, “lin-tm” is linear translation model adaptation. * and ** denote significant gains over the row “no vsm” at p < 0.05 and p < 0.01 levels, respectively. get phrase being preferred by the language model in the VSM-adapted system from the one preferred in the baseline system because of a Bhattacharyyamediated change in the phrase preceding it). An interesting pattern soon emerged: the VSMadapted system seems to be better than the baseline at choosing among synonyms in a way that is appropriate to the genre or style of a text. For instance, where the text to be translated is from an informal genre such as weblog, the VSM-adapted system will often pick an informal word where the baseline picks a formal word with the same or similar meaning, and vice versa where the text to be translated is from a more formal genre. To our surprise, we saw few examples where the VSMadapted system did a better job than the baseline of choosing between two words with different meaning, but we saw many examples where the VSMadapted system did a better job than the baseline of choosing between two words that both have the same meaning according to considerations of style and genre. Two examples are shown in Table 7. In the first example, the first two lines show that VSM finds that the Chinese-English phrase pair (殴打, assaulted) has a Bhattacharyya (BC) similarity of 0.556163 to the dev set, while the phrase pair (殴 打, beat) has a BC similarity of 0.780787 to the dev. In this situation, the VSM-adapted system thus prefers “beat” to “assaulted” as a translation for 殴打. The next four lines show the source sentence (SRC), the reference (REF), the baseline output (BSL), and the output of the VSM-adapted system. Note that the result of VSM adaptation is that the rather formal word “assaulted” is replaced by its informal near-synonym “beat” in the translation of an informal weblog text. “apprehend” might be preferable to “arrest” in a legal text. However, it looks as though the VSM-adapted system has learned from the dev that among synonyms, those more characteristic of news stories than of legal texts should be chosen: it therefore picks “arrest” over its synonym “apprehend”. What follows is a partial list of pairs of phrases (all single words) from our system’s outputs, where the baseline chose the first member of a pair and the VSM-adapted system chose the second member of the pair to translate the same Chinese phrase into English (because the second word yields a better BC score for the dev set we used). It will be seen that nearly all of the pairs involve synonyms or near-synonyms rather than words with radically different senses (one exception below is “center” vs “heart”). Instead, the differences between the two words tend to be related to genre or style: gunmen-mobsters, champion-star, updated-latest, caricatures-cartoons, spill-leakage, hiv-aids, inkling-clues, behaviour-actions, deceittrick, brazen-shameless, aristocratic-noble, circumvent-avoid, attack-criticized, descent-born, hasten-quickly, precipice-cliff, center-heart, blessing-approval, imminent-approaching, stormed-rushed, etc. 4 Conclusions and future work This paper proposed a new approach to domain adaptation in statistical machine translation, based on vector space models (VSMs). 
This approach measures the similarity between a vector representing a particular phrase pair in the phrase table and a vector representing the dev set, yielding a feature associated with that phrase pair that will be used by the decoder. The approach is simple, easy to implement, and computationally cheap. For the two language pairs we looked at, it provided a large performance improvement over a non-adaptive baseline, and also compared 1291 1 phrase 殴打↔assaulted (0.556163) pairs 殴打↔beat (0.780787) SRC ...那些殴打村民的地皮流氓... REF ... those local ruffians and hooligans who beat up villagers ... BSL ... those who assaulted the villagers land hooligans ... VSM ... hooligans who beat the villagers ... 2 phrase 缉拿↔apprehend (0.286533) pairs 缉拿↔arrest (0.603342) SRC ... 缉拿凶手并且将之绳之以法。 REF ... catch the killers and bring them to justice . BSL ... apprehend the perpetrators and bring them to justice . VSM ... arrest the perpetrators and bring them to justice . Table 7: Examples show that VSM chooses translations according to considerations of style and genre. favourably with linear mixture adaptation techniques. Furthermore, VSM adaptation can be exploited in a number of different ways, which we have only begun to explore. In our experiments, we based the vector space on subcorpora defined by the nature of the training data. This was done purely out of convenience: there are many, many ways to define a vector space in this situation. An obvious and appealing one, which we intend to try in future, is a vector space based on a bag-of-words topic model. A feature derived from this topicrelated vector space might complement some features derived from the subcorpora which we explored in the experiments above, and which seem to exploit information related to genre and style. References Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In EMNLP 2011. Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In Proceedings of the 4th Workshop on Statistical Machine Translation, Athens, March. WMT. A. Bhattacharyya. 1943. On a measure of divergence between two statistical populations defined by their probability distributions. Bulletin of the Calcutta Mathematical Society, 35:99–109. Sung-Hyuk Cha. 2007. Comprehensive survey on distance/similarity measures between probability density functions. International Journal of Mathematical Models ind Methods in Applied Sciences, 1(4):300–307. Boxing Chen, Min Zhang, Aiti Aw, and Haizhou Li. 2008. Exploiting n-best hypotheses for smt selfenhancement. In ACL 2008. Boxing Chen, Roland Kuhn, George Foster, and Howard Johnson. 2011. Unpacking and transforming feature functions: New ways to smooth phrase tables. In MT Summit 2011. Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In NAACL 2012. George Foster and Roland Kuhn. 2007. Mixturemodel adaptation for SMT. In Proceedings of the ACL Workshop on Statistical Machine Translation, Prague, June. WMT. George Foster, Cyril Goutte, and Roland Kuhn. 2010. Discriminative instance weighting for domain adaptation in statistical machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP), Boston. Michel Galley and C. D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In EMNLP 2008, pages 848–856, Hawaii, October. 
Almut Silja Hildebrand, Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Adaptation of the translation model for statistical machine translation based on information retrieval. In Proceedings of the 10th EAMT Conference, Budapest, May. Donald Hindle. 1990. Noun classification from predicate.argument structures. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics (ACL), pages 268–275, Pittsburgh, PA, June. ACL. Fei Huang and Bing Xiang. 2010. Feature-rich discriminative phrase rescoring for SMT. In COLING 2010. Jun’ichi Kazama, Stijn De Saeger, Kow Kuroda, Masaki Murata, and Kentaro Torisawa. 2010. A 1292 bayesian method for robust estimation of distributional similarities. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 247–256, Uppsala, Sweden, July. ACL. Philipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 224–227, Prague, Czech Republic, June. Association for Computational Linguistics. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL 2007, Demonstration Session. P. Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), Barcelona, Spain. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING/ACL98, pages 768–774, Montreal, Quebec, Canada. Yajuan L¨u, Jin Huang, and Qun Liu. 2007. Improving Statistical Machine Translation Performance by Training Data Selection and Optimization. In Proceedings of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP), Prague, Czech Republic. K. Lund and C. Burgess. 1996. Producing high-dimensional semantic spaces from lexical cooccurrence. Behavior Research Methods Instruments and Computers, 28(2):203–208. Spyros Matsoukas, Antti-Veikko I. Rosti, and Bing Zhang. 2009. Discriminative corpus weight estimation for machine translation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), Singapore. Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In ACL 2010. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318, Philadelphia, July. ACL. Aaron B. Phillips and Ralf D. Brown. 2011. Training machine translation with a second-order taylor approximation of weighted translation instances. In MT Summit 2011. Holger Schwenk. 2008. Investigations on largescale lightly-supervised training for statistical machine translation. In IWSLT 2008. Rico Sennrich. 2012. Perplexity minimization for translation model domain adaptation in statistical machine translation. In EACL 2012. Peter Turney. 2001. Mining the web for synonyms: Pmi-ir versus lsa on toefl. In Twelfth European Conference on Machine Learning, page 491–502, Berlin, Germany. Nicola Ueffing, Gholamreza Haffari, and Anoop Sarkar. 2007. Transductive learning for statistical machine translation. 
In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), Prague, Czech Republic, June. ACL. Bing Zhao, Matthias Eck, and Stephan Vogel. 2004. Language model adaptation for statistical machine translation with structured query models. In Proceedings of the International Conference on Computational Linguistics (COLING) 2004, Geneva, August.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1294–1303, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics From Natural Language Specifications to Program Input Parsers Tao Lei, Fan Long, Regina Barzilay, and Martin Rinard Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {taolei, fanl, regina, rinard}@csail.mit.edu Abstract We present a method for automatically generating input parsers from English specifications of input file formats. We use a Bayesian generative model to capture relevant natural language phenomena and translate the English specification into a specification tree, which is then translated into a C++ input parser. We model the problem as a joint dependency parsing and semantic role labeling task. Our method is based on two sources of information: (1) the correlation between the text and the specification tree and (2) noisy supervision as determined by the success of the generated C++ parser in reading input examples. Our results show that our approach achieves 80.0% F-Score accuracy compared to an F-Score of 66.7% produced by a state-of-the-art semantic parser on a dataset of input format specifications from the ACM International Collegiate Programming Contest (which were written in English for humans with no intention of providing support for automated processing).1 1 Introduction The general problem of translating natural language specifications into executable code has been around since the field of computer science was founded. Early attempts to solve this problem produced what were essentially verbose, clumsy, and ultimately unsuccessful versions of standard formal programming languages. In recent years 1The code, data, and experimental setup for this research are available at http://groups.csail.mit.edu/rbg/code/nl2p the input a single integer T test cases an integer N the next N lines N characters The input contains a single integer T that indicates the number of test cases. Then follow the T cases. Each test case begins with a line contains an integer N, representing the size of wall. The next N lines represent the original wall. Each line contains N characters. The j-th character of the i-th line figures out the color ... (a) Text Specification: (b) Specification Tree: (c) Two Program Input Examples: 1 10 YYWYYWWWWW YWWWYWWWWW YYWYYWWWWW ... WWWWWWWWWW 2 1 Y 5 YWYWW ... WWYYY Figure 1: An example of (a) one natural language specification describing program input data; (b) the corresponding specification tree representing the program input structure; and (c) two input examples however, researchers have had success addressing specific aspects of this problem. Recent advances in this area include the successful translation of natural language commands into database queries (Wong and Mooney, 2007; Zettlemoyer and Collins, 2009; Poon and Domingos, 2009; Liang et al., 2011) and the successful mapping of natural language instructions into Windows command sequences (Branavan et al., 2009; Branavan et al., 2010). In this paper we explore a different aspect of this general problem: the translation of natural language input specifications into executable code that correctly parses the input data and generates 1294 data structures for holding the data. 
The need to automate this task arises because input format specifications are almost always described in natural languages, with these specifications then manually translated by a programmer into the code for reading the program inputs. Our method highlights potential to automate this translation, thereby eliminating the manual software development overhead. Consider the text specification in Figure 1a. If the desired parser is implemented in C++, it should create a C++ class whose instance objects hold the different fields of the input. For example, one of the fields of this class is an integer, i.e., “a single integer T” identified in the text specification in Figure 1a. Instead of directly generating code from the text specification, we first translate the specification into a specification tree (see Figure 1b), then map this tree into parser code (see Figure 2). We focus on the translation from the text specification to the specification tree.2 We assume that each text specification is accompanied by a set of input examples that the desired input parser is required to successfully read. In standard software development contexts, such input examples are usually available and are used to test the correctness of the input parser. Note that this source of supervision is noisy — the generated parser may still be incorrect even when it successfully reads all of the input examples. Specifically, the parser may interpret the input examples differently from the text specification. For example, the program input in Figure 1c can be interpreted simply as a list of strings. The parser may also fail to parse some correctly formatted input files not in the set of input examples. Therefore, our goal is to design a technique that can effectively learn from this weak supervision. We model our problem as a joint dependency parsing and role labeling task, assuming a Bayesian generative process. The distribution over the space of specification trees is informed by two sources of information: (1) the correlation between the text and the corresponding specification tree and (2) the success of the generated parser in reading input examples. Our method uses a joint probability distribution to take both of these sources of information into account, and uses a sampling framework for the inference of specifi2During the second step of the process, the specification tree is deterministically translated into code. 1 struct TestCaseType { 2 int N; 3 vector<NLinesType*> lstLines; 4 InputType* pParentLink; 5 } 6 7 struct InputType { 8 int T; 9 vector<TestCaseType*> lstTestCase; 10 } 11 12 TestCaseType* ReadTestCase(FILE * pStream, 13 InputType* pParentLink) { 14 TestCaseType* pTestCase 15 = new TestCaseType; 16 pTestCase→pParentLink = pParentLink; 17 18 ... 19 20 return pTestCase; 21 } 22 23 InputType* ReadInput(FILE * pStream) { 24 InputType* pInput = new InputType; 25 26 pInput→T = ReadInteger(pStream); 27 for (int i = 0; i < pInput→T; ++i) { 28 TestCaseType* pTestCase 29 = new TestCaseType; 30 pTestCase = ReadTestCase (pStream, 31 pInput); 32 pInput→lstTestCase.push back (pTestCase); 33 } 34 35 return pInput; 36 } Figure 2: Input parser code for reading input files specified in Figure 1. cation trees given text specifications. A specification tree is rejected in the sampling framework if the corresponding code fails to successfully read all of the input examples. The sampling framework also rejects the tree if the text/specification tree pair has low probability. 
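The weak supervision described above can be pictured as a simple pass/fail harness around the generated parser: compile it, run it on every available input example, and reject the candidate specification tree on any failure. The sketch below is only a hypothetical illustration of that check; the compiler invocation, the exit-code convention, and the function name are assumptions rather than the authors' implementation:

import os
import subprocess
import tempfile

def passes_all_examples(parser_source, example_files):
    # Return True only if the generated C++ parser reads every input example
    # without error. Assumes (not stated in the paper) that the parser signals
    # a format mismatch by crashing or exiting with a non-zero status.
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "parser.cpp")
        binary = os.path.join(tmp, "parser")
        with open(src, "w") as f:
            f.write(parser_source)
        # Compile the generated parser (assumes g++ is available).
        if subprocess.run(["g++", "-O2", "-o", binary, src]).returncode != 0:
            return False
        # Run it on each example; any failure rejects the candidate tree.
        for path in example_files:
            try:
                with open(path, "rb") as fin:
                    result = subprocess.run([binary], stdin=fin, timeout=10)
            except subprocess.TimeoutExpired:
                return False
            if result.returncode != 0:
                return False
    return True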
We evaluate our method on a dataset of input specifications from ACM International Collegiate Programming Contests, along with the corresponding input examples. These specifications were written for human programmers with no intention of providing support for automated processing. However, when trained using the noisy supervision, our method achieves substantially more accurate translations than a state-of-the-art semantic parser (Clarke et al., 2010) (specifically, 80.0% in F-Score compared to an F-Score of 66.7%). The strength of our model in the face of such weak supervision is also highlighted by the fact that it retains an F-Score of 77% even when only one input example is provided for each input 1295 Your program is supposed to read the input from the standard input and write its output to the standard output. The first line of the input contains one integer N. N lines follow, the i-th of them contains two real numbers Xi, Yi separated by a single space - the coordinates of the i-th house. Each of the following lines contains four real numbers separated by a single space. These numbers are the coordinates of two different points (X1, Y1) and (X2, Y2), lying on the highway. (a) Text Specification: the input one integer N N lines the following lines Specification Tree: (b) two real numbers Xi, Yi four real numbers (c) Input := N Lines [size = N] FollowingLines [size = *] N := int Lines := Xi Yi Xi := float Yi := float Formal Input Grammar Definition: FollowingLines := F1 F2 F3 F4 F1 := float Figure 3: An example of generating input parser code from text: (a) a natural language input specification; (b) a specification tree representing the input format structure (we omit the background phrases in this tree in order to give a clear view of the input format structure); and (c) formal definition of the input format constructed from the specification tree, represented as a context-free grammar in Backus-Naur Form with additional size constraints. specification. 2 Related Work Learning Meaning Representation from Text Mapping sentences into structural meaning representations is an active and extensively studied task in NLP. Examples of meaning representations considered in prior research include logical forms based on database query (Tang and Mooney, 2000; Zettlemoyer and Collins, 2005; Kate and Mooney, 2007; Wong and Mooney, 2007; Poon and Domingos, 2009; Liang et al., 2011; Goldwasser et al., 2011), semantic frames (Das et al., 2010; Das and Smith, 2011) and database records (Chen and Mooney, 2008; Liang et al., 2009). Learning Semantics from Feedback Our approach is related to recent research on learning from indirect supervision. Examples include leveraging feedback available via responses from a virtual world (Branavan et al., 2009) or from executing predicted database queries (Chang et al., 2010; Clarke et al., 2010). While Branavan et al. (2009) formalize the task as a sequence of decisions and learns from local rewards in a Reinforcement Learning framework, our model learns to predict the whole structure at a time. Another difference is the way our model incorporates the noisy feedback. While previous approaches rely on the feedback to train a discriminative prediction model, our approach models a generative process to guide structure predictions when the feedback is noisy or unavailable. NLP in Software Engineering Researchers have recently developed a number of approaches that apply natural language processing techniques to software engineering problems. 
Examples include analyzing API documents to infer API library specifications (Zhong et al., 2009; Pandita et al., 2012) and analyzing code comments to detect concurrency bugs (Tan et al., 2007; Tan et al., 2011). This research analyzes natural language in documentation or comments to better understand existing application programs. Our mechanism, in contrast, automatically generates parser programs from natural language input format descriptions. 3 Problem Formulation The task of translating text specifications to input parsers consists of two steps, as shown in Figure 3. First, given a text specification describing an input format, we wish to infer a parse tree (which we call a specification tree) implied by the text. Second, we convert each specification tree into formal grammar of the input format (represented in Backus-Naur Form) and then generate code that reads the input into data structures. In this paper, we focus on the NLP techniques used in the first step, i.e., learning to infer the specification trees from text. The second step is achieved using a deterministic rule-based tool. 3 As input, we are given a set of text specifications w = {w1, · · · , wN}, where each wi is a text specification represented as a sequence of noun phrases {wi k}. We use UIUC shallow parser to preprocess each text specificaton into a sequence of the noun phrases.4 In addition, we are given a set of input examples for each wi. We use these examples to test the generated input parsers to re3Specifically, the specification tree is first translated into the grammar using a set of rules and seed words that identifies basic data types such as int. Our implementation then generates a top-down parser since the generated grammar is simple. In general, standard techniques such as Bison and Yacc (Johnson, 1979) can generate bottom-up parsers given such grammar. 4http://cogcomp.cs.illinois.edu/demo/shallowparse/?id=7 1296 ject incorrect predictions made by our probabilistic model. We formalize the learning problem as a dependency parsing and role labeling problem. Our model predicts specification trees t = {t1, · · · , tN} for the text specifications, where each specification tree ti is a dependency tree over noun phrases {wi k}. In general many program input formats are nested tree structures, in which the tree root denotes the entire chunk of program input data and each chunk (tree node) can be further divided into sub-chunks or primitive fields that appear in the program input (see Figure 3). Therefore our objective is to predict a dependency tree that correctly represents the structure of the program input. In addition, the role labeling problem is to assign a tag zi k to each noun phrase wi k in a specification tree, indicating whether the phrase is a key phrase or a background phrase. Key phrases are named entities that identify input fields or input chunks appear in the program input data, such as “the input” or “the following lines” in Figure 3b. In contrast, background phrases do not define input fields or chunks. These phrases are used to organize the document (e.g., “your program”) or to refer to key phrases described before (e.g., “each line”). 4 Model We use two kinds of information to bias our model: (1) the quality of the generated code as measured by its ability to read the given input examples and (2) the features over the observed text wi and the hidden specification tree ti (this is standard in traditional parsing problems). 
We combine these two kinds of information into a Bayesian generative model in which the code quality of the specification tree is captured by the prior probability P(t) and the feature observations are encoded in the likelihood probability P(w|t). The inference jointly optimizes these two factors: P(t|w) ∝P(t) · P(w|t). Modeling the Generative Process. We assume the generative model operates by first generating the model parameters from a set of Dirichlet distributions. The model then generates text specification trees. Finally, it generates natural language feature observations conditioned on the hidden specification trees. The generative process is described formally as follows: • Generating Model Parameters: For every pair of feature type f and phrase tag z, draw a multinomial distribution parameter θz f from a Dirichlet prior P(θz f). The multinomial parameters provide the probabilities of observing different feature values in the text. • Generating Specification Tree: For each text specification, draw a specification tree t from all possible trees over the sequence of noun phrases in this specification. We denote the probability of choosing a particular specification tree t as P(t). Intuitively, this distribution should assign high probability to good specification trees that can produce C++ code that reads all input examples without errors, we therefore define P(t) as follows:5 P(t) = 1 Z ·        1 the input parser of tree t reads all input examples without error ϵ otherwise where Z is a normalization factor and ϵ is empirically set to 10−6. In other words, P(·) treats all specification trees that pass the input example test as equally probable candidates and inhibits the model from generating trees which fail the test. Note that we do not know this distribution a priori until the specification trees are evaluated by testing the corresponding C++ code. Because it is intractable to test all possible trees and all possible generated code for a text specification, we never explicitly compute the normalization factor 1/Z of this distribution. We therefore use sampling methods to tackle this problem during inference. • Generating Features: The final step generates lexical and contextual features for each tree. For each phrase wk associated with tag zk, let wp be its parent phrase in the tree and ws be the non-background sibling phrase to its left in the tree. The model generates the corresponding set of features φ(wp, ws, wk) for each text phrase tuple (wp, ws, wk), with 5When input examples are not available, P(t) is just uniform distribution. 1297 probability P(φ(wp, ws, wk)). We assume that each feature fj is generated independently: P(w|t) = P(φ(wp, ws, wk)) = Y fj∈φ(wp,ws,wk) θzk fj where θzk fj is the j-th component in the multinomial distribution θzk f denoting the probability of observing a feature fj associated with noun phrase wk labeled with tag zk. We define a range of features that capture the correspondence between the input format and its description in natural language. For example, at the unigram level we aim to capture that noun phrases containing specific words such as “cases” and “lines” may be key phrases (correspond to data chunks appear in the input), and that verbs such as “contain” may indicate that the next noun phrase is a key phrase. The full joint probability of a set w of N specifications and hidden text specification trees t is defined as: P(θ, t, w) = P(θ) N Y i=1 P(ti)P(wi|ti, θ) = P(θ) N Y i=1 P(ti) Y k P(φ(wi p, wi s, wi k)). 
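Putting the two factors together, a candidate tree can be scored, up to the intractable constant 1/Z that is never computed, as log P(t) + log P(w|t). The sketch below is an illustrative rendering of that unnormalized score, assuming the pass/fail feedback bit and the per-tag feature multinomials θ are given; the tag names, feature strings, and the small smoothing constant for unseen features are assumptions of this sketch:

import math

EPSILON = 1e-6  # prior mass for trees whose generated parser fails the input examples

def log_prior(passes_examples):
    # Unnormalized log P(t): uniform over trees that pass the test, epsilon otherwise.
    return 0.0 if passes_examples else math.log(EPSILON)

def log_likelihood(tagged_phrases, theta):
    # log P(w|t): each feature of each (parent, sibling, phrase) tuple is drawn
    # independently from the multinomial theta[tag] of that phrase's tag.
    total = 0.0
    for tag, features in tagged_phrases:      # tag is 'key' or 'background'
        for f in features:                    # e.g. word, verb, distance features
            total += math.log(theta[tag].get(f, 1e-12))
    return total

def log_score(tagged_phrases, theta, passes_examples):
    # Unnormalized log P(t) + log P(w|t), proportional to the posterior P(t|w).
    return log_prior(passes_examples) + log_likelihood(tagged_phrases, theta)

# Hypothetical tiny example: two phrases with their tags and extracted features.
theta = {"key": {"Word=lines": 0.2, "Verb=contains": 0.3},
         "background": {"Word=program": 0.4}}
phrases = [("key", ["Word=lines", "Verb=contains"]), ("background", ["Word=program"])]
print(log_score(phrases, theta, passes_examples=True))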
Learning the Model During inference, we want to estimate the hidden specification trees t given the observed natural language specifications w, after integrating the model parameters out, i.e. t ∼P(t|w) = Z θ P(t, θ|w)dθ. We use Gibbs sampling to sample variables t from this distribution. In general, the Gibbs sampling algorithm randomly initializes the variables and then iteratively solves one subproblem at a time. The subproblem is to sample only one variable conditioned on the current values of all other variables. In our case, we sample one hidden specification tree ti while holding all other trees t−i fixed: ti ∼P(ti|w, t−i) (1) where t−i = (t1, · · · , ti−1, ti+1, · · · , tN). However directly solving the subproblem (1) in our case is still hard, we therefore use a Metropolis-Hastings sampler that is similarly applied in traditional sentence parsing problems. Specifically, the Hastings sampler approximates (1) by first drawing a new ti′ from a tractable proposal distribution Q instead of P(ti|w, t−i). We choose Q to be: Q(ti′|θ′, wi) ∝P(wi|ti′, θ′). (2) Then the probability of accepting the new sample is determined using the typical Metropolis Hastings process. Specifically, ti′ will be accepted to replace the last ti with probability: R(ti, ti′) = min ( 1, P(ti′|w, t−i) Q(ti|θ′, wi) P(ti|w, t−i) Q(ti′|θ′, wi) ) = min ( 1, P(ti′, t−i, w)P(wi|ti, θ′) P(ti, t−i, w)P(wi|ti′, θ′) ) , in which the normalization factors 1/Z are cancelled out. We choose θ′ to be the parameter expectation based on the current observations, i.e. θ′ = E  θ|w, t−i , so that the proposal distribution is close to the true distribution. This sampling algorithm with a changing proposal distribution has been shown to work well in practice (Johnson and Griffiths, 2007; Cohn et al., 2010; Naseem and Barzilay, 2011). The algorithm pseudo code is shown in Algorithm 1. To sample from the proposal distribution (2) efficiently, we implement a dynamic programming algorithm which calculates marginal probabilities of all subtrees. The algorithm works similarly to the inside algorithm (Baker, 1979), except that we do not assume the tree is binary. We therefore perform one additional dynamic programming step that sums over all possible segmentations of each span. Once the algorithm obtains the marginal probabilities of all subtrees, a specification tree can be drawn recursively in a top-down manner. Calculating P(t, w) in R(t, t′) requires integrating the parameters θ out. This has a closed form due to the Dirichlet-multinomial conjugacy: P(t, w) = P(t) · Z θ P(w|t, θ)P(θ)dθ ∝P(t) · Y Beta (count(f) + α) . Here α are the Dirichlet hyper parameters and count(f) are the feature counts observed in data (t, w). The closed form is a product of the Beta functions of each feature type. 1298 Feature Type Description Feature Value Word each word in noun phrase wk lines, VAR Verb verbs in noun phrase wk and the verb phrase before wk contains Distance sentence distance between wk and its parent phrase wp 1 Coreference wk share duplicate nouns or variable names with wp or ws True Table 1: Example of feature types and values. To deal with sparsity, we map variable names such as “N” and “X” into a category word “VAR” in word features. 
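For concreteness, the collapsed term P(t, w) ∝ P(t) · ∏ Beta(count(f) + α) that appears in the acceptance ratio can be evaluated in log space with gamma functions. The sketch below assumes the feature counts are grouped per feature type and tag and uses an illustrative symmetric α; the grouping, the counts, and the function names are assumptions of this sketch, not the authors' code:

from math import lgamma

def log_beta(values):
    # log of the multivariate Beta function: sum_i log Gamma(v_i) - log Gamma(sum_i v_i).
    return sum(lgamma(v) for v in values) - lgamma(sum(values))

def log_joint(log_prior_t, feature_counts_by_group, alpha=0.1):
    # Unnormalized log P(t, w) = log P(t) + sum over groups of log Beta(count + alpha),
    # after the multinomial parameters have been integrated out.
    total = log_prior_t
    for counts in feature_counts_by_group.values():
        total += log_beta([c + alpha for c in counts])
    return total

# Hypothetical counts: how often each word / verb feature value fired for 'key' phrases.
counts = {("Word", "key"): [12, 3, 0, 7], ("Verb", "key"): [5, 1, 2]}
print(log_joint(log_prior_t=0.0, feature_counts_by_group=counts))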
Input: Set of text specification documents w = {w1, · · · , wN}, Number of iterations T Randomly initialize specification trees 1 t = {t1, · · · , tN} for iter = 1 · · · T do 2 Sample tree ti for i-th document: 3 for i = 1 · · · N do 4 Estimate model parameters: 5 θ′ = E  θ′|w, t−i 6 Sample a new specification tree from distribution 7 Q: t′ ∼Q(t′|θ′, wi) 8 Generate and test code, and return feedback: 9 f ′ = CodeGenerator(wi, t′) 10 Calculate accept probability r: 11 r = R(ti, t′) 12 Accept the new tree with probability r: 13 With probability r : ti = t′ 14 end 15 end 16 Produce final structures: 17 return { ti if ti gets positive feedback } 18 Algorithm 1: The sampling framework for learning the model. Model Implementation: We define several types of features to capture the correlation between the hidden structure and its expression in natural language. For example, verb features are introduced because certain preceding verbs such as “contains” and “consists” are good indicators of key phrases. There are 991 unique features in total in our experiments. Examples of features appear in Table 1. We use a small set of 8 seed words to bias the search space. Specifically, we require each leaf key phrase to contain at least one seed word that identifies the C++ primitive data type (such as “integer”, “float”, “byte” and “string”). We also encourage a phrase containing the word “input” to be the root of the tree (for example, “the input file”) and each coreference phrase to be a Total # of words 7330 Total # of noun phrases 1829 Vocabulary size 781 Avg. # of words per sentence 17.29 Avg. # of noun phrase per document 17.26 Avg. # of possible trees per document 52K Median # of possible trees per document 79 Min # of possible trees per document 1 Max # of possible trees per document 2M Table 2: Statistics for 106 ICPC specifications. background phrase (for example, “each test case” after mentioning “test cases”), by initially adding pseudo counts to Dirichlet priors. 5 Experimental Setup Datasets: Our dataset consists of problem descriptions from ACM International Collegiate Programming Contests.6 We collected 106 problems from ACM-ICPC training websites.7 From each problem description, we extracted the portion that provides input specifications. Because the test input examples are not publicly available on the ACM-ICPC training websites, for each specification, we wrote simple programs to generate 100 random input examples. Table 2 presents statistics for the text specification set. The data set consists of 424 sentences, where an average sentence contains 17.3 words. The data set contains 781 unique words. The length of each text specification varies from a single sentence to eight sentences. The difference between the average and median number of trees is large. This is because half of the specifications are relatively simple and have a small number of possible trees, while a few difficult specifications have over thousands of possible trees (as the number of trees grows exponentially when the text length increases). Evaluation Metrics: We evaluate the model 6Official Website: http://cm.baylor.edu/welcome.icpc 7PKU Online Judge: http://poj.org/; UVA Online Judge: http://uva.onlinejudge.org/ 1299 performance in terms of its success in generating a formal grammar that correctly represents the input format (see Figure 3c). As a gold annotation, we construct formal grammars for all text specifications. 
Our results are generated by automatically comparing the machine-generated grammars with their golden counterparts. If the formal grammar is correct, then the generated C++ parser will correctly read the input file into corresponding C++ data structures. We use Recall and Precision as evaluation measures: Recall = # correct structures # text specifications Precision = # correct structures # produced structures where the produced structures are the positive structures returned by our framework whose corresponding code successfully reads all input examples (see Algorithm 1 line 18). Note the number of produced structures may be less than the number of text specifications, because structures that fail the input test are not returned. Baselines: To evaluate the performance of our model, we compare against four baselines. The No Learning baseline is a variant of our model that selects a specification tree without learning feature correspondence. It continues sampling a specification tree for each text specification until it finds one which successfully reads all of the input examples. The second baseline Aggressive is a state-ofthe-art semantic parsing framework (Clarke et al., 2010).8 The framework repeatedly predicts hidden structures (specification trees in our case) using a structure learner, and trains the structure learner based on the execution feedback of its predictions. Specifically, at each iteration the structure learner predicts the most plausible specification tree for each text document: ti = argmaxt f(wi, t). Depending on whether the corresponding code reads all input examples successfully or not, the (wi, ti) pairs are added as an positive or negative sample to populate a training set. After each iteration the structure learner is re-trained with the training samples to improve the prediction accuracy. In our experiment, we follow (Clarke et al., 8We take the name Aggressive from this paper. Model Recall Precision F-Score No Learning 52.0 57.2 54.5 Aggressive 63.2 70.5 66.7 Full Model 72.5 89.3 80.0 Full Model (Oracle) 72.5 100.0 84.1 Aggressive (Oracle) 80.2 100.0 89.0 Table 3: Average % Recall and % Precision of our model and all baselines over 20 independent runs. 2010) and choose a structural Support Vector Machine SVMstruct 9 as the structure learner. The remaining baselines provide an upper bound on the performance of our model. The baseline Full Model (Oracle) is the same as our full model except that the feedback comes from an oracle which tells whether the specification tree is correct or not. We use this oracle information in the prior P(t) same as we use the noisy feedback. Similarly the baseline Aggressive (Oracle) is the Aggressive baseline with access to the oracle. Experimental Details: Because no human annotation is required for learning, we train our model and all baselines on all 106 ICPC text specifications (similar to unsupervised learning). We report results averaged over 20 independent runs. For each of these runs, the model and all baselines run 100 iterations. For baseline Aggressive, in each iteration the SVM structure learner predicts one tree with the highest score for each text specification. If two different specification trees of the same text specification get positive feedback, we take the one generated in later iteration for evaluation. 6 Experimental Results Comparison with Baselines Table 3 presents the performance of various models in predicting correct specification trees. As can be seen, our model achieves an F-Score of 80%. 
Our model therefore significantly outperforms the No Learning baseline (by more than 25%). Note that the No Learning baseline achieves a low Precision of 57.2%. This low precision reflects the noisiness of the weak supervision - nearly one half of the parsers produced by No Learning are actually incorrect even though they read all of the input examples without error. This comparison shows the importance of capturing correlations between the specification trees and their text descriptions. 9www.cs.cornell.edu/people/tj/svm light/svm struct.html 1300 (a) The next N lines of the input file contain the Cartesian coordinates of watchtowers, one pair of coordinates per line. (b) The input contains several testcases. Each is specified by two strings S, T of alphanumeric ASCII characters Figure 4: Examples of dependencies and key phrases predicted by our model. Green marks correct key phrases and dependencies and red marks incorrect ones. The missing key phrases are marked in gray. %supervision Figure 5: Precision and Recall of our model by varying the percentage of weak supervision. The green lines are the performance of Aggressive baseline trained with full weak supervision. Because our model learns correlations via feature representations, it produces substantially more accurate translations. While both the Full Model and Aggressive baseline use the same source of feedback, they capitalize on it in a different way. The baseline uses the noisy feedback to train features capturing the correlation between trees and text. Our model, in contrast, combines these two sources of information in a complementary fashion. This combination allows our model to filter false positive feedback and produce 13% more correct translations than the Aggressive baseline. Clean versus Noisy Supervision To assess the impact of noise on model accuracy, we compare the Full Model against the Full Model (Oracle). The two versions achieve very close performance (80% v.s 84% in F-Score), even though Full Model is trained with noisy feedback. This demonstrates the strength of our model in learning from such weak supervision. Interestingly, Aggressive (Oracle) outperforms our oracle model by a 5% margin. This result shows that when the supervision is reliable, the generative assumption limits our model’s ability to gain the same performance improvement as discriminative models. #input examples Figure 6: Precision and Recall of our model by varying the number of available input examples per text specification. Impact of Input Examples Our model can also be trained in a fully unsupervised or a semisupervised fashion. In real cases, it may not be possible to obtain input examples for all text specifications. We evaluate such cases by varying the amount of supervision, i.e. how many text specifications are paired with input examples. In each run, we randomly select text specifications and only these selected specifications have access to input examples. Figure 5 gives the performance of our model with 0% supervision (totally unsupervised) to 100% supervision (our full model). With much less supervision, our model is still able to achieve performance comparable with the Aggressive baseline. We also evaluate how the number of provided input examples influences the performance of the model. Figure 6 indicates that the performance is largely insensitive to the number of input examples — once the model is given even one input example, its performance is close to the best performance it obtains with 100 input examples. 
We attribute this phenomenon to the fact that if the generated code is incorrect, it is unlikely to successfully parse any input. Case Study Finally, we consider some text specifications that our model does not correctly trans1301 late. In Figure 4a, the program input is interpreted as a list of character strings, while the correct interpretation is that the input is a list of string pairs. Note that both interpretations produce C++ input parsers that successfully read all of the input examples. One possible way to resolve this problem is to add other features such as syntactic dependencies between words to capture more language phenomena. In Figure 4b, the missing key phrase is not identified because our model is not able to ground the meaning of “pair of coordinates” to two integers. Possible future extensions to our model include using lexicon learning methods for mapping words to C++ primitive types for example “coordinates” to ⟨int, int⟩. 7 Conclusion It is standard practice to write English language specifications for input formats. Programmers read the specifications, then develop source code that parses inputs in the format. Known disadvantages of this approach include development cost, parsers that contain errors, specification misunderstandings, and specifications that become out of date as the implementation evolves. Our results show that taking both the correlation between the text and the specification tree and the success of the generated C++ parser in reading input examples into account enables our method to correctly generate C++ parsers for 72.5% of our natural language specifications. 8 Acknowledgements The authors acknowledge the support of Battelle Memorial Institute (PO #300662) and the NSF (Grant IIS-0835652). Thanks to Mirella Lapata, members of the MIT NLP group and the ACL reviewers for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. References James K. Baker. 1979. Trainable grammars for speech recognition. In DH Klatt and JJ Wolf, editors, Speech Communication Papers for the 97th Meeting of the Acoustical Society of America, pages 547– 550. S. R. K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. S.R.K Branavan, Luke Zettlemoyer, and Regina Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Proceedings of ACL, pages 1268–1277. Mingwei Chang, Vivek Srikumar, Dan Goldwasser, and Dan Roth. 2010. Structured output learning with indirect supervision. In Proceedings of the 27th International Conference on Machine Learning. David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In Proceedings of 25th International Conference on Machine Learning (ICML-2008). James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Trevor Cohn, Phil Blunsom, and Sharon Goldwater. 2010. Inducing tree-substitution grammars. Journal of Machine Learning Research, 11. Dipanjan Das and Noah A. Smith. 2011. Semisupervised frame-semantic parsing for unknown predicates. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1435– 1444. Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith. 2010. Probabilistic frame-semantic parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 948–956. Dan Goldwasser, Roi Reichart, James Clarke, and Dan Roth. 2011. Confidence driven unsupervised semantic parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11. Mark Johnson and Thomas L. Griffiths. 2007. Bayesian inference for pcfgs via markov chain monte carlo. In Proceedings of the North American Conference on Computational Linguistics (NAACL ’07). Stephen C. Johnson. 1979. Yacc: Yet another compiler-compiler. Unix Programmer’s Manual, vol 2b. Rohit J. Kate and Raymond J. Mooney. 2007. Learning language semantics from ambiguous supervision. In Proceedings of the 22nd national conference on Artificial intelligence - Volume 1, AAAI’07. 1302 P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP). P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Tahira Naseem and Regina Barzilay. 2011. Using semantic cues to learn syntax. In Proceedings of the 25th National Conference on Artificial Intelligence (AAAI). Rahul Pandita, Xusheng Xiao, Hao Zhong, Tao Xie, Stephen Oney, and Amit Paradkar. 2012. Inferring method specifications from natural language api descriptions. In Proceedings of the 2012 International Conference on Software Engineering, ICSE 2012, pages 815–825, Piscataway, NJ, USA. IEEE Press. Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 - Volume 1, EMNLP ’09. Lin Tan, Ding Yuan, Gopal Krishna, and Yuanyuan Zhou. 2007. /* iComment: Bugs or bad comments? */. In Proceedings of the 21st ACM Symposium on Operating Systems Principles (SOSP07), October. Lin Tan, Yuanyuan Zhou, and Yoann Padioleau. 2011. aComment: Mining annotations from comments and code to detect interrupt-related concurrency bugs. In Proceedings of the 33rd International Conference on Software Engineering (ICSE11), May. Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: integrating statistical and relational learning for semantic parsing. In Proceedings of the conference on Empirical Methods in Natural Language Processing, EMNLP ’00. Yuk Wah Wong and Raymond J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In ACL. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of UAI, pages 658–666. Luke S. Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Hao Zhong, Lu Zhang, Tao Xie, and Hong Mei. 2009. Inferring resource specifications from natural language api documentation. 
In Proceedings of the 2009 IEEE/ACM International Conference on Automated Software Engineering, ASE ’09, pages 307–318, Washington, DC, USA. IEEE Computer Society.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1304–1311, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Entity Linking for Tweets Xiaohua Liu†, Yitong Li‡, Haocheng Wu♯, Ming Zhou†, Furu Wei†, Yi Lu§ †Microsoft Research Asia, Beijing, 100190, China ‡School of Electronic and Information Engineering Beihang University, Beijing, 100191, China ♯University of Science and Technology of China No. 96, Jinzhai Road, Hefei, Anhui, China §School of Computer Science and Technology Harbin Institute of Technology, Harbin, 150001, China †{xiaoliu, mingzhou, fuwei}@microsoft.com ‡[email protected][email protected] §[email protected] Abstract We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method. 1 Introduction Twitter is a widely used social networking service. With millions of active users and hundreds of millions of new published tweets every day1, it has become a popular platform to capture and transmit the human experiences of the moment. Many tweet related researches are inspired, from named entity recognition (Liu et al., 2012), topic detection (Mathioudakis and Koudas, 2010), clustering (Rosa et al., 2010), to event extraction (Grinev et al., 2009). In this work, we study the entity linking task for tweets, which maps each entity mention in a tweet to a unique entity, i.e., an entry ID of a knowledge base like Wikipedia. Entity 1http://siteanalytics.compete.com/twitter.com/ linking task is generally considered as a bridge between unstructured text and structured machinereadable knowledge base, and represents a critical role in machine reading program (Singh et al., 2011). Entity linking for tweets is particularly meaningful, considering that tweets are often hard to read owing to its informal written style and length limitation of 140 characters. Current entity linking methods are built on top of a large scale knowledge base such as Wikipedia. A knowledge base consists of a set of entities, and each entity can have a variation list2. To decide which entity should be mapped, they may compute: 1) the similarity between the context of a mention, e.g., a text window around the mention, and the content of an entity, e.g., the entity page of Wikipedia (Mihalcea and Csomai, 2007; Han and Zhao, 2009); 2) the coherence among the mapped entities for a set of related mentions, e.g, multiple mentions in a document (Milne and Witten, 2008; Kulkarni et al., 2009; Han and Zhao, 2010; Han et al., 2011). Tweets pose special challenges to entity linking. First, a tweet is often too concise and too noisy to provide enough information for similarity computing, owing to its short and grass root nature. 
Second, tweets have rich variations of named entities3, and many of them fall out of the scope of the existing dictionaries mined from Wikipedia (called OOV mentions hereafter). On 2Entity variation lists can be extracted from the entity resolution pages of Wikipedia. For example, the link “http://en.wikipedia.org/wiki/Svm” will lead us to a resolution page, where “Svm” are linked to entities like “Space vector modulation” and “Support vector machine”. As a result, “Svm” will be added into the variation lists of “Space vector modulation” and “Support vector machine” , respectively. 3According to Liu et al. (2012), on average a named entity has 3.3 different surface forms in tweets. 1304 the other hand, the huge redundancy in tweets offers opportunities. That means, an entity mention often occurs in many tweets, which allows us to aggregate all related tweets to compute mention-mention similarity and mentionentity similarity. We propose a collective inference method that leverages tweet redundancy to address those two challenges. Given a set of mentions, our model tries to ensure that similar mentions are linked to similar entities while pursuing the high total similarity between matched mentionentity pairs. More specifically, we define local features, including context similarity and edit distance, to model the similarity between a mention and an entity. We adopt in-link based similarity (Milne and Witten, 2008), to measure the similarity between entities. Finally, we introduce a set of features to compute the similarity between mentions, including how similar the tweets containing the mentions are, whether they come from the tweets of the same account, and their edit distance. Notably, our model can resolve OOV mentions with the help of their similar mentions. For example, for the OOV mention “LukeBryanOnline”, our model can find similar mentions like “TheLukeBryan” and “LukeBryan”. Considering that most of its similar mentions are mapped to the American country singer “Luke Bryan”, our model tends to link “LukeBryanOnline” to the same entity. We evaluate our method on the public available data set shared by Meij et al. (2012)4. Experimental results show that our method outperforms two baselines, i.e., Wikify! (Mihalcea and Csomai, 2007) and system proposed by Meij et al. (2012). We also study the effectiveness of features related to each kind of similarity, and demonstrate the advantage of our method for OOV mention linkage. We summarize our contributions as follows. 1. We introduce a novel collective inference method that integrates three kinds of similarities, i.e., mention-entity similarity, entity-entity similarity, and mention-mention similarity, to simultaneously map a set of tweet mentions to their proper entities. 2. We propose modeling the mention-mention similarity and demonstrate its effectiveness 4http://ilps.science.uva.nl/resources/wsdm2012-addingsemantics-to-microblog-posts/ in entity linking for tweets, particularly for OOV mentions. 3. We evaluate our method on a public data set, and show our method compares favorably with the baselines. Our paper is organized as follows. In the next section, we introduce related work. In Section 3, we give the formal definition of the task. In Section 4, we present our solution, including the framework, features related to different kinds of similarities, and the training and decoding procedures. We evaluate our method in Section 5. Finally in Section 6, we conclude with suggestions of future work. 
2 Related Work Existing entity linking work can roughly be divided into two categories. Methods of the first category resolve one mention at each time, and mainly consider the similarity between a mention-entity pair. In contrast, methods of the second category take a set of related mentions (e.g., mentions in the same document) as input, and figure out their corresponding entities simultaneously. Examples of the first category include the first Web-scale entity linking system SemTag (Dill et al., 2003), Wikify! (Mihalcea and Csomai, 2007), and the recent work of Milne and Witten (2008). SemTag uses the TAP knowledge base5, and employs the cosine similarity with TF-IDF weighting scheme to compute the match degree between a mention and an entity, achieving an accuracy of around 82%. Wikify! identifies the important concepts in the text and automatically links these concepts to the corresponding Wikipedia pages. It introduces two approaches to define mention-entity similarity, i.e., the contextual overlap between the paragraph where the mention occurs and the corresponding Wikipedia pages, and a Naive Bayes classifier that predicts whether a mention should be linked to an entity. It achieves 80.69% F1 when two approaches are combined. Milne and Witten work on the same task of Wikify!, and also train a classifier. However, they cleverly use the 5TAB (http://www.w3.org/2002/05/tap/) is a shallow knowledge base that contains a broad range of lexical and taxonomic information about popular objects like music, movies, authors, sports, autos, health, etc. 1305 links found within Wikipedia articles for training, exploiting the fact that for every link, a Wikipedian has manually selected the correct destination to represent the intended sense of the anchor. Their method achieves an F1 score of 75.0%. Representative studies of the second category include the work of Kulkarni et al. (2009), Han et al. (2011), and Shen et al. (2012). One common feature of these studies is that they leverage the global coherence between entities. Kulkarni et al. (2009) propose a graphical model that explicitly models the combination of evidence from local mentionentity compatibility and global document-level topical coherence of the entities, and show that considering global coherence between entities significantly improves the performance. Han et al. (2011) introduce a graph-based representation, called Referent Graph, to model the global interdependence between different entity linking decisions, and jointly infer the referent entities of all name mentions in a document by exploiting the interdependence captured in Referent Graph. Shen et al. (2012) propose LIEGE, a framework to link the entities in web lists with the knowledge base, with the assumption that entities mentioned in a Web list tend to be a collection of entities of the same conceptual type. Most work of entity linking focuses on web pages. Recently, Meij et al. (2012) study this task for tweets. They propose a machine learning based approach using n-gram features, concept features, and tweet features, to identify concepts semantically related to a tweet, and for every entity mention to generate links to its corresponding Wikipedia article. Their method belongs to the first category, in the sense that they only consider the similarity between mention (tweet) and entity (Wikipedia article). Our method belongs to the second category. However, in contrast with existing collective approaches, our method works on tweets which are short and often noisy. 
Furthermore, our method is based on the “similar mention with similar entity” assumption, and explicitly models and integrates mention-mention similarity into the optimization framework. Compared with Meij et al. (2012), our method is collective and integrates more features.

3 Task Definition

Given a sequence of mentions, denoted by ⃗M = (m1, m2, · · · , mn), our task is to output a sequence of entities, denoted by ⃗E = (e1, e2, · · · , en), where ei is the entity corresponding to mi. Here, an entity refers to an item of a knowledge base. Following most existing work, we use Wikipedia as the knowledge base, and an entity is a definition page in Wikipedia; a mention denotes a sequence of tokens in a tweet that can potentially be linked to an entity. Several notes should be made. First, we assume that mentions are given, e.g., identified by some named entity recognition system. Second, mentions may come from multiple tweets. Third, mentions with the same token sequence may refer to different entities, depending on mention context. Finally, we assume each entity e has a variation list6 and a unique ID through which all related information about that entity can be accessed. Here is an example to illustrate the task. Given the mentions “nbcnightlynews”, “Santiago”, “WH” and “Libya” from the tweet “Chuck Todd: Prepping for @nbcnightlynews here in Santiago, reporting on WH handling of Libya situation.”, the expected output is “NBC Nightly News(194735)”, “Santiago Chile(51572)”, “White House(33057)” and “Libya(17633)”, where the numbers in the parentheses are the IDs of the corresponding entities.

6For example, the variation list of the entity “Obama” may contain “Barack Obama”, “Barack Hussein Obama II”, etc.

4 Our Method

In this section, we first present the framework of our entity linking method. Then we introduce features related to the different kinds of similarities, followed by a detailed discussion of the training and decoding procedures.

4.1 Framework

Given the input mention sequence ⃗M = (m1, m2, · · · , mn), our method outputs the entity sequence ⃗E∗ = (e∗1, e∗2, · · · , e∗n) according to Formula 1:

$$\vec{E}^* = \operatorname*{argmax}_{\forall \vec{E} \in C(\vec{M})} \; \lambda \sum_{i=1}^{n} \vec{w} \cdot \vec{f}(e_i, m_i) \;+\; (1-\lambda) \sum_{i \neq j} r(e_i, e_j)\, s(m_i, m_j) \quad (1)$$

where:
• C(⃗M) is the set of all possible entity sequences for the mention sequence ⃗M;
• ⃗E denotes an entity sequence instance, consisting of e1, e2, · · · , en;
• ⃗f(ei, mi) is the feature vector that models the similarity between mention mi and its linked entity ei;
• ⃗w is the feature weight vector related to ⃗f, which is trained on the training data set; ⃗w · ⃗f(ei, mi) is the similarity between mention mi and entity ei;
• r(ei, ej) is the function that returns the similarity between two entities ei and ej;
• s(mi, mj) is the function that returns the similarity between two mentions mi and mj;
• λ ∈ (0, 1) is a system parameter, tuned on the development data set, which adjusts the tradeoff between local compatibility and global consistency; it is experimentally set to 0.8 in our work.

From Formula 1, we can see that: 1) our method considers mention-entity similarity, entity-entity similarity and mention-mention similarity, where mention-entity similarity models local compatibility, while entity-entity similarity and mention-mention similarity combined model global consistency; and 2) our method prefers configurations in which similar mentions are linked to similar entities and the local compatibility is high. The candidate space C(⃗M) deserves further discussion.
It represents the search space, which can be generated using the entity variation lists. To achieve this, we first build an inverted index of all entity variation lists, with each unique variation as an entry pointing to a list of entities. Then, for any mention m, we look up the index and get all possible entities, denoted by C(m). In this way, given a mention sequence ⃗M = (m1, m2, · · · , mn), we can enumerate all possible entity sequences ⃗E = (e1, e2, · · · , en), where ei ∈ C(mi). This means |C(⃗M)| = ∏_{m∈⃗M} |C(m)|, which is often large. There is one special case: if m is an OOV mention, i.e., |C(m)| = 0, then |C(⃗M)| = 0, and we get no solution. To address this problem, we generate a list of candidates for an OOV mention using its similar mentions. Letting S(m) denote OOV mention m’s similar mentions, we define C(m) = ∪_{m′∈S(m)} C(m′). If C(m) is still empty, we remove m from ⃗M and report that we cannot map it to any entity.

Here is an example to illustrate our framework. Suppose we have the following tweets:
• UserA: Yeaaahhgg #habemusfut.. I love monday night futbol =) #EnglishPremierLeague ManU vs Liverpool1
• UserA: Manchester United 3 - Liverpool2 2 #EnglishPremierLeague GLORY, GLORY, MAN.UNITED!
• · · ·

We need to find the best entity sequence ⃗E∗ for the mentions ⃗M = {“Liverpool1”, “Manchester United”, “ManU”, “Liverpool2”}, from the candidate entity sequences C(⃗M) = {(Liverpool (film), Manchester United F.C., Manchester United F.C., Liverpool (film)), · · · , (Liverpool F.C., Manchester United F.C., Manchester United F.C., Liverpool (film))}. Figure 1 illustrates our solution, where “Liverpool1” (on the left) and “Liverpool2” (on the right) are linked to “Liverpool F.C.” (the football club), and “Manchester United” and “ManU” are linked to “Manchester United F.C.”. Notably, “ManU” is an OOV mention, but it has a similar mention, “Manchester United”, through which “ManU” is successfully mapped.

Figure 1: An illustrative example of our framework. Ovals in orange and in blue represent mentions and entities, respectively. Each mention pair, entity pair, and mention-entity pair has a similarity score, represented by s, r and f, respectively.

4.2 Features

We group features into three categories: local features related to mention-entity similarity (⃗f(e, m)), features related to entity-entity similarity (r(ei, ej)), and features related to mention-mention similarity (s(mi, mj)).

4.2.1 Local Features

• Prior Probability:
$$f_1(m_i, e_i) = \frac{count(e_i)}{\sum_{\forall e_k \in C(m_i)} count(e_k)} \quad (2)$$
where count(e) denotes the frequency of entity e in Wikipedia’s anchor texts.
• Context Similarity:
$$f_2(m_i, e_i) = \frac{cooccurrence\_number}{tweet\_length} \quad (3)$$
where cooccurrence_number is the number of words that occur both in the tweet containing mi and in the Wikipedia page of ei, and tweet_length denotes the number of tokens of the tweet containing mention mi.
• Edit Distance Similarity: If Length(mi) + ED(mi, ei) = Length(ei), then f3(mi, ei) = 1, otherwise 0. ED(·, ·) computes the character-level edit distance. This feature helps to detect whether a mention is an abbreviation of its corresponding entity7.
• Mention Contains Title: If the mention contains the entity title, namely the title of the Wikipedia page introducing the entity ei, then f4(mi, ei) = 1, else 0.
• Title Contains Mention: If the entity title contains the mention, then f5(mi, ei) = 1, otherwise 0.

7Take “ms” and “Microsoft” for example. The length of “ms” is 2, and the edit distance between them is 7; 2 plus 7 equals 9, which is the length of “Microsoft”.
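Before turning to the non-local features, here is a minimal Python sketch of the candidate-generation step from Section 4.1: an inverted index over entity variation lists, plus the OOV back-off C(m) = ∪_{m′∈S(m)} C(m′). The function names and the toy variation lists are assumptions for illustration only, and obtaining the similar-mention set S(m) is taken as given.

```python
# Minimal sketch of candidate generation, assuming entity variation lists have
# already been mined from Wikipedia resolution pages. All names here
# (build_variation_index, candidates, ...) are illustrative, not from the
# original system.
from collections import defaultdict

def build_variation_index(variation_lists):
    """variation_lists: dict mapping entity title -> list of surface variations."""
    index = defaultdict(set)
    for entity, variations in variation_lists.items():
        for v in variations:
            index[v.lower()].add(entity)      # each variation points to its entities
    return index

def candidates(mention, index):
    """C(m): every entity whose variation list contains the mention."""
    return set(index.get(mention.lower(), set()))

def candidates_with_oov_backoff(mention, index, similar_mentions):
    """If C(m) is empty, borrow candidates from similar mentions S(m)."""
    cands = candidates(mention, index)
    if not cands:
        for m2 in similar_mentions:
            cands |= candidates(m2, index)
    return cands                              # still empty => report as unlinkable

# Toy variation lists (made up for the running example):
variation_lists = {
    "Manchester United F.C.": ["Manchester United", "Man Utd"],
    "Liverpool F.C.": ["Liverpool", "Liverpool FC"],
    "Liverpool (film)": ["Liverpool"],
}
index = build_variation_index(variation_lists)
print(candidates("Liverpool", index))                      # ambiguous: club vs. film
print(candidates_with_oov_backoff("ManU", index,           # OOV mention
                                  ["Manchester United"]))  # borrowed from S(m)
```

As in the running example above, the OOV mention “ManU” receives its candidates from the similar mention “Manchester United”.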
4.2.2 Features Related to Entity Similarity

There are two representative definitions of entity similarity: in-link based similarity (Milne and Witten, 2008) and category based similarity (Shen et al., 2012). Considering that the Wikipedia categories are often noisy (Milne and Witten, 2008), we adopt in-link based similarity, as defined in Formula 4:

$$r(e_i, e_j) = \frac{\log|g(e_i) \cap g(e_j)| - \log\max(|g(e_i)|, |g(e_j)|)}{\log(Total) - \log\min(|g(e_i)|, |g(e_j)|)} \quad (4)$$

where:
• Total is the total number of knowledge base entities;
• g(e) is the set of Wikipedia definition pages that have a link to entity e.

4.2.3 Features Related to Mention Similarity

We define five features to model the similarity between two mentions mi and mj, as listed below, where t(m) denotes the tweet that contains mention m:
• s1(mi, mj): the cosine similarity of t(mi) and t(mj), where tweets are represented as TF-IDF vectors;
• s2(mi, mj): the cosine similarity of t(mi) and t(mj), where tweets are represented as topic distribution vectors;
• s3(mi, mj): whether t(mi) and t(mj) are published by the same account;
• s4(mi, mj): whether t(mi) and t(mj) contain any common hashtag;
• s5(mi, mj): edit distance related similarity between mi and mj, as defined in Formula 5:

$$s_5(m_i, m_j) = \begin{cases} 1, & \text{if } \min\{Length(m_i), Length(m_j)\} + ED(m_i, m_j) = \max\{Length(m_i), Length(m_j)\} \\ 1 - \dfrac{ED(m_i, m_j)}{\max\{Length(m_i), Length(m_j)\}}, & \text{otherwise} \end{cases} \quad (5)$$

Note that: 1) before computing TF-IDF vectors, stop words are removed; 2) we use the Stanford Topic Modeling Toolbox8 to compute the topic model, and experimentally set the number of topics to 50.

8http://nlp.stanford.edu/software/tmt/tmt-0.4/

Finally, Formula 6 is used to integrate all the features, where ⃗a = (a1, a2, a3, a4, a5) is the feature weight vector for mention similarity, ak ∈ (0, 1) for k = 1, . . . , 5, and ∑_{k=1}^{5} ak = 1:

$$s(m_i, m_j) = \sum_{k=1}^{5} a_k s_k(m_i, m_j) \quad (6)$$

4.3 Training and Decoding

Given n mentions m1, m2, · · · , mn and their corresponding entities e1, e2, · · · , en, the goal of training is to determine ⃗w∗, the weights of the local features, and ⃗a∗, the weights of the features related to mention similarity, according to Formula 7:

$$(\vec{w}^*, \vec{a}^*) = \operatorname*{argmin}_{\vec{w}, \vec{a}} \left\{ \frac{1}{n} \sum_{i=1}^{n} L_1(e_i, m_i) + \alpha_1 \|\vec{w}\|^2 + \frac{\alpha_2}{2} \sum_{i,j=1}^{n} s(m_i, m_j)\, L_2(\vec{a}, e_i, e_j) \right\} \quad (7)$$

where:
• L1 is the loss function related to local compatibility, defined as $\frac{1}{\vec{w} \cdot \vec{f}(e_i, m_i) + 1}$;
• L2(⃗a, ei, ej) is the loss function related to global coherence, defined as $\frac{1}{r(e_i, e_j)\sum_{k=1}^{5} a_k s_k(m_i, m_j) + 1}$;
• α1 is the weight of the regularization term, which is experimentally set to 1.0;
• α2 is the weight of the L2 loss, which is experimentally set to 0.2.

Since the decoding problem defined by Formula 1 is NP-hard (Kulkarni et al., 2009), we develop a greedy hill-climbing approach to tackle this challenge, as demonstrated in Algorithm 1. In Algorithm 1, it is the iteration counter; Score(⃗E, ⃗M) = λ ∑_{i=1}^{n} ⃗w · ⃗f(ei, mi) + (1 − λ) ∑_{i≠j} r(ei, ej) s(mi, mj); ⃗Eij is the vector obtained by replacing ei with ej ∈ C(mi) in the current ⃗E; and scij is the score of ⃗Eij, i.e., Score(⃗Eij, ⃗M). In each iteration, the algorithm substitutes one entry ei of ⃗E so as to increase the total score cur. If the score cannot be further improved, it stops and returns the current ⃗E.

9This optimization problem is non-convex. We use coordinate descent to get a local optimal solution.
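The same procedure can be written compactly in Python. The sketch below mirrors the scoring function of Formula 1 and the hill-climbing loop of Algorithm 1, which is listed next; the local feature function f, the similarities r and s, the learned weights w, the candidate sets C and the prior are all assumed to be supplied, so this is an illustration of the search procedure rather than the authors' implementation.

```python
# A compact, illustrative rendering of the objective (Formula 1) and the greedy
# hill-climbing search of Algorithm 1 (listed below). Assumes f, r, s, w, the
# candidate sets C and the prior are given.
import numpy as np

def score(E, M, w, f, r, s, lam=0.8):
    """lam * sum_i w.f(e_i, m_i) + (1 - lam) * sum_{i != j} r(e_i, e_j) * s(m_i, m_j)."""
    local = sum(np.dot(w, f(e, m)) for e, m in zip(E, M))
    glob = sum(r(E[i], E[j]) * s(M[i], M[j])
               for i in range(len(M)) for j in range(len(M)) if i != j)
    return lam * local + (1.0 - lam) * glob

def greedy_decode(M, C, w, f, r, s, lam=0.8, prior=None):
    """Start from each mention's highest-prior candidate, then repeatedly apply
    the single substitution that most improves the total score, stopping when
    no substitution helps (a local optimum)."""
    if prior is None:
        E = [next(iter(C[m])) for m in M]
    else:
        E = [max(C[m], key=lambda e: prior(m, e)) for m in M]
    cur = score(E, M, w, f, r, s, lam)
    while True:
        best_gain, best_change = 0.0, None
        for i, m in enumerate(M):
            for e in C[m]:
                if e == E[i]:
                    continue
                sc = score(E[:i] + [e] + E[i + 1:], M, w, f, r, s, lam)
                if sc - cur > best_gain:
                    best_gain, best_change = sc - cur, (i, e)
        if best_change is None:
            return E                      # no improving substitution left
        i, e = best_change
        E[i], cur = e, cur + best_gain
```

Each outer pass examines every alternative candidate of every mention, matching the double loop in lines 7–14 of Algorithm 1 below.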
Algorithm 1 Decoding Algorithm
Input: Mention Set ⃗M = (m1, m2, · · · , mn)
Output: Entity Set ⃗E = (e1, e2, · · · , en)
1: for i = 1 to n do
2:   Initialize e(0)_i as the entity with the largest prior probability given mention mi.
3: end for
4: cur = Score(⃗E(0), ⃗M)
5: it = 1
6: while true do
7:   for i = 1 to n do
8:     for ej ∈ C(mi) do
9:       if ej ≠ e(it−1)_i then
10:        ⃗E(it)_ij = ⃗E(it−1) − {e(it−1)_i} + {ej}.
11:      end if
12:      scij = Score(⃗E(it)_ij, ⃗M).
13:    end for
14:  end for
15:  (l, m) = argmax_(i,j) scij.
16:  sc∗ = sclm
17:  if sc∗ > cur then
18:    cur = sc∗.
19:    ⃗E(it) = ⃗E(it−1) − {e(it−1)_l} + {em}.
20:    it = it + 1.
21:  else
22:    break
23:  end if
24: end while
25: return ⃗E(it).

5 Experiments

In this section, we introduce the data set and experimental settings, and present the results.

5.1 Data Preparation

Following most existing studies, we choose Wikipedia as our knowledge base10. We index the Wikipedia definition pages and prepare all the required prior knowledge, such as count(e), g(e), and the entity variation lists. We also build an inverted index with about 60 million entries for the entity variation lists. For tweets, we use the data set shared by Meij et al. (2012)11. This data set is annotated manually by two volunteers. We get 502 annotated tweets from this data set. We keep 55 of them for development, and use the remaining tweets for 5-fold cross-validation.

10We download the December 2012 version of Wikipedia, which contains about four million articles.
11http://ilps.science.uva.nl/resources/wsdm2012-addingsemantics-to-microblog-posts/.

5.2 Settings

We consider the following settings to evaluate our method:
• comparing our method with two baselines, i.e., Wikify! (Mihalcea and Csomai, 2007) and the system proposed by Meij et al. (2012)12;
• using only local features;
• using various mention similarity features;
• experiments on OOV mentions.

12We re-implement Wikify! since we use a new evaluation data set.

5.3 Results

Table 1 reports the comparison results. Our method outperforms both systems in terms of all metrics. Since the main difference between our method and the baselines is that our method considers not only local features, but also global features related to entity similarity and mention similarity, these results indicate the effectiveness of collective inference and global features. For example, we find that both baselines incorrectly link “Nickelodeon” in the tweet “BOH will make a special appearance on Nickelodeon’s ‘Yo Gabba Gabba’ tomorrow” to the theater instead of the TV channel. In contrast, our method notices that “Yo Gabba Gabba” in the same tweet can be linked to “Yo Gabba Gabba (TV show)”, and thus it correctly maps “Nickelodeon” to “Nickelodeon (TV channel)”.

System          Pre.    Rec.    F1
Wikify!         0.375   0.421   0.396
Meij’s Method   0.734   0.632   0.679
Our Method      0.752   0.675   0.711
Table 1: Comparison with Baselines.

Table 2 shows the results when local features are incrementally added. It can be seen that: 1) using only the Prior Probability feature already yields a reasonable F1; and 2) the Context Similarity and Edit Distance Similarity features contribute little to the F1, while the Mention and Entity Title Similarity features greatly boost the F1.

Local Feature   Pre.    Rec.    F1
P.P.            0.700   0.599   0.646
+C.S.           0.694   0.597   0.642
+E.D.S.         0.696   0.598   0.643
+M.E.T.S.       0.735   0.632   0.680
Table 2: Local Feature Analysis. P.P., C.S., E.D.S., and M.E.T.S. denote Prior Probability, Context Similarity, Edit Distance Similarity, and Mention and Entity Title Similarity, respectively.
The performance of our method with various mention similarity features is reported in Table 3. First, we can see that with this kind of features, the F1 can be significantly improved from 0.680 to 0.704. Second, we notice that TF-IDF (s1) and Topic Model (s2) features perform equally well, and combining all mention similarity features yields the best performance. Global Feature Pre. Rec. F1 s3+s4+s5 0.744 0.653 0.700 s3+s4+s5 +s1 0.759 0.652 0.702 s3+s4+s5+s2 0.760 0.653 0.703 s3+s4+s5+s1+s2 0.764 0.653 0.704 Table 3: Mention Similarity Feature Analysis. For any OOV mention, we use the strategy of guessing its possible entity candidates using similar mentions, as discussed in Section 4.1. Table 4 shows the performance of our system for OOV mentions. It can be seen that with our OOV strategy, the recall is improved from 0.653 to 0.675 (with p < 0.05) while the Precision is slightly dropped and the overall F1 still gets better. A further study reveals that among all the 125 OOV mentions, there are 48 for which our method cannot find any entity; and nearly half of these 48 OOV mentions do have corresponding entities 13. This suggests that we may need enlarge the size of variation lists or develop some mention normalization techniques. OOV Method Precision Recall F1 Ignore OOV Mention 0.764 0.653 0.704 + OOV Method 0.752 0.675 0.711 Table 4: Performance for OOV Mentions. 13“NATO-ukraine cooperations” is such an example. It is mapped to NULL but actually has a corresponding entity “Ukraine-NATO relations” 1310 6 Conclusions and Future work We have presented a collective inference method that jointly links a set of tweet mentions to their corresponding entities. One distinguished characteristic of our method is that it integrates mention-entity similarity, entity-entity similarity, and mention-mention similarity, to address the information lack in a tweet and rich OOV mentions. We evaluate our method on a public data set. Experimental results show our method outperforms two baselines, and suggests the effectiveness of modeling mention-mention similarity, particularly for OOV mention linking. In the future, we plan to explore two directions. First, we are going to enlarge the size of entity variation lists. Second, we want to integrate the entity mention normalization techniques as introduced by Liu et al. (2012). Acknowledgments We thank the anonymous reviewers for their valuable comments. We also thank all the QuickView team members for the helpful discussions. References S. Dill, N. Eiron, D. Gibson, D. Gruhl, and R. Guha. 2003. Semtag and seeker: bootstrapping the semantic web via automated semantic annotation. In Proceedings of the 12th international conference on World Wide Web, WWW ’03, pages 178–186, New York, NY, USA. ACM. Maxim Grinev, Maria Grineva, Alexander Boldakov, Leonid Novak, Andrey Syssoev, and Dmitry Lizorkin. 2009. Sifting micro-blogging stream for events of user interest. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’09, pages 837–837, New York, NY, USA. ACM. Xianpei Han and Jun Zhao. 2009. Nlpr-kbp in tac 2009 kbp track: A two-stage method to entity linking. In Proceedings of Test Analysis Conference. Xianpei Han and Jun Zhao. 2010. Structural semantic relatedness: a knowledge-based method to named entity disambiguation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Xianpei Han, Le Sun, and Jun Zhao. 2011. 
Collective entity linking in web text: A graph-based method. In SIGIR’11. Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of wikipedia entities in web text. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 457–465. Xiaohua Liu, Ming Zhou, Xiangyang Zhou, Zhongyang Fu, and Furu Wei. 2012. Joint inference of named entity recognition and normalization for tweets. In ACL (1), pages 526–535. Michael Mathioudakis and Nick Koudas. 2010. Twittermonitor: trend detection over the twitter stream. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data, SIGMOD ’10, pages 1155–1158, New York, NY, USA. ACM. Edgar Meij, Wouter Weerkamp, and Maarten de Rijke. 2012. Adding semantics to microblog posts. In Proceedings of the fifth ACM international conference on Web search and data mining. Rada Mihalcea and Andras Csomai. 2007. Wikify!: linking documents to encyclopedic knowledge. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, CIKM ’07, pages 233–242, New York, NY, USA. ACM. David Milne and Ian H. Witten. 2008. Learning to link with wikipedia. In Proceeding of the 17th ACM conference on Information and knowledge management. Kevin Dela Rosa, Rushin Shah, Bo Lin, Anatole Gershman, and Robert Frederking. 2010. Topical clustering of tweets. In SWSM’10. Wei Shen, Jianyong Wang, Ping Luo, and Min Wang. 2012. Liege: Link entities in web lists with knowledge base. In KDD’12. Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2011. Largescale cross-document coreference using distributed inference and hierarchical models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 793– 803, Stroudsburg, PA, USA. Association for Computational Linguistics. 1311
2013
128
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1312–1320, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Identification of Speakers in Novels Hua He† Denilson Barbosa ‡ Grzegorz Kondrak‡ †Department of Computer Science ‡Department of Computing Science University of Maryland University of Alberta [email protected] {denilson,gkondrak}@ualberta.ca Abstract Speaker identification is the task of attributing utterances to characters in a literary narrative. It is challenging to automate because the speakers of the majority of utterances are not explicitly identified in novels. In this paper, we present a supervised machine learning approach for the task that incorporates several novel features. The experimental results show that our method is more accurate and general than previous approaches to the problem. 1 Introduction Novels are important as social communication documents, in which novelists develop the plot by means of discourse between various characters. In spite of a frequently expressed opinion that all novels are simply variations of a certain number of basic plots (Tobias, 2012), every novel has a unique plot (or several plots) and a different set of characters. The interactions among characters, especially in the form of conversations, help the readers construct a mental model of the plot and the changing relationships between characters. Many of the complexities of interpersonal relationships, such as romantic interests, family ties, and rivalries, are conveyed by utterances. A precondition for understanding the relationship between characters and plot development in a novel is the identification of speakers behind all utterances. However, the majority of utterances are not explicitly tagged with speaker names, as is the case in stage plays and film scripts. In most cases, authors rely instead on the readers’ comprehension of the story and of the differences between characters. Since manual annotation of novels is costly, a system for automatically determining speakers of utterances would facilitate other tasks related to the processing of literary texts. Speaker identification could also be applied on its own, for instance in generating high quality audio books without human lectors, where each character would be identifiable by a distinct way of speaking. In addition, research on spoken language processing for broadcast and multi-party meetings (Salamin et al., 2010; Favre et al., 2009) has demonstrated that the analysis of dialogues is useful for the study of social interactions. In this paper, we investigate the task of speaker identification in novels. Departing from previous approaches, we develop a general system that can be trained on relatively small annotated data sets, and subsequently applied to other novels for which no annotation is available. Since every novel has its own set of characters, speaker identification cannot be formulated as a straightforward tagging problem with a universal set of fixed tags. Instead, we adopt a ranking approach, which enables our model to be applied to literary texts that are different from the ones it has been trained on. Our approach is grounded in a variety of features that are easily generalizable across different novels. Rather than attempt to construct complete semantic models of the interactions, we exploit lexical and syntactic clues in the text itself. 
We propose several novel features, including the speaker alternation pattern, the presence of vocatives in utterances, and unsupervised actor-topic features that associate speakers with utterances on the basis of their content. Experimental evaluation shows that our approach not only outperforms the baseline, but also compares favorably to previous approaches in terms of accuracy and generality, even when tested on novels and authors that are different from those used for training. The paper is organized as follows. After discussing previous work, and defining the terminology, we present our approach and the features that it is based on. Next, we describe the data, the an1312 notation details, and the results of our experimental evaluation. At the end, we discuss an application to extracting a set of family relationships from a novel. 2 Related Work Previous work on speaker identification includes both rule-based and machine-learning approaches. Glass and Bangay (2007) propose a rule generalization method with a scoring scheme that focuses on the speech verbs. The verbs, such as said and cried, are extracted from the communication category of WordNet (Miller, 1995). The speech-verb-actor pattern is applied to the utterance, and the speaker is chosen from the available candidates on the basis of a scoring scheme. Sarmento and Nunes (2009) present a similar approach for extracting speech quotes from online news texts. They manually define 19 variations of frequent speaker patterns, and identify a total of 35 candidate speech verbs. The rule-based methods are typically characterized by low coverage, and are too brittle to be reliably applied to different domains and changing styles. Elson and McKeown (2010) (henceforth referred to as EM2010) apply the supervised machine learning paradigm to a corpus of utterances extracted from novels. They construct a single feature vector for each pair of an utterance and a speaker candidate, and experiment with various WEKA classifiers and score-combination methods. To identify the speaker of a given utterance, they assume that all previous utterances are already correctly assigned to their speakers. Our approach differs in considering the utterances in a sequence, rather than independently from each other, and in removing the unrealistic assumption that the previous utterances are correctly identified. The speaker identification task has also been investigated in other domains. Bethard et al. (2004) identify opinion holders by using semantic parsing techniques with additional linguistic features. Pouliquen et al. (2007) aim at detecting direct speech quotations in multilingual news. Krestel et al. (2008) automatically tag speech sentences in newspaper articles. Finally, Ruppenhofer et al. (2010) implement a rule-based system to enrich German cabinet protocols with automatic speaker attribution. 3 Definitions and Conventions In this section, we introduce the terminology used in the remainder of the paper. Our definitions are different from those of EM2010 partly because we developed our method independently, and partly because we disagree with some of their choices. The examples are from Jane Austen’s Pride and Prejudice, which was the source of our development set. An utterance is a connected text that can be attributed to a single speaker. Our task is to associate each utterance with a single speaker. Utterances that are attributable to more than one speaker are rare; in such cases, we accept correctly identifying one of the speakers as sufficient. 
In some cases, an utterance may include more than one quotationdelimited sequence of words, as in the following example. “Miss Bingley told me,” said Jane, “that he never speaks much.” In this case, the words said Jane are simply a speaker tag inserted into the middle of the quoted sentence. Unlike EM2010, we consider this a single utterance, rather than two separate ones. We assume that all utterances within a paragraph can be attributed to a single speaker. This “one speaker per paragraph” property is rarely violated in novels — we identified only five such cases in Pride & Prejudice, usually involving one character citing another, or characters reading letters containing quotations. We consider this an acceptable simplification, much like assigning a single part of speech to each word in a corpus. We further assume that each utterance is contained within a single paragraph. Exceptions to this rule can be easily identified and resolved by detecting quotation marks and other typographical conventions. The paragraphs without any quotations are referred to as narratives. The term dialogue denotes a series of utterances together with related narratives, which provide the context of conversations. We define a dialogue as a series of utterances and intervening narratives, with no more than three continuous narratives. The rationale here is that more than three narratives without any utterances are likely to signal the end of a particular dialogue. We distinguish three types of utterances, which are listed with examples in Table 1: explicit speaker (identified by name within the paragraph), 1313 Category Example Implicit speaker “Don’t keep coughing so, Kitty, for heaven’s sake!” Explicit speaker “I do not cough for my own amusement,” replied Kitty. Anaphoric speaker “Kitty has no discretion in her coughs,” said her father. Table 1: Three types of utterances. anaphoric speaker (identified by an anaphoric expression), and implicit speaker (no speaker information within the paragraph). Typically, the majority of utterances belong to the implicit-speaker category. In Pride & Prejudice only roughly 25% of the utterances have explicit speakers, and an even smaller 15% belong to the anaphoric-speaker category. In modern fiction, the percentage of explicit attributions is even lower. 4 Speaker Identification In this section, we describe our method of extracting explicit speakers, and our ranking approach, which is designed to capture the speaker alternation pattern. 4.1 Extracting Speakers We extract explicit speakers by focusing on the speech verbs that appear before, after, or between quotations. The following verbs cover most cases in our development data: say, speak, talk, ask, reply, answer, add, continue, go on, cry, sigh, and think. If a verb from the above short list cannot be found, any verb that is preceded by a name or a personal pronoun in the vicinity of the utterance is selected as the speech verb. In order to locate the speaker’s name or anaphoric expression, we apply a deterministic method based on syntactic rules. First, all paragraphs that include narrations are parsed with a dependency parser. For example, consider the following paragraph: As they went downstairs together, Charlotte said, “I shall depend on hearing from you very often, Eliza.” The parser identifies a number of dependency relations in the text, such as dobj(went-3, downstairs4) and advmod(went-3, together-5). 
Our method extracts the speaker’s name from the dependency relation nsubj(said-8, Charlotte-7), which links a speech verb with a noun phrase that is the syntactic subject of a clause. Once an explicit speaker’s name or an anaphoric expression is located, we determine the corresponding gender information by referring to the character list or by following straightforward rules to handle the anaphora. For example, if the utterance is followed by the phrase she said, we infer that the gender of the speaker is female. 4.2 Ranking Model In spite of the highly sequential nature of the chains of utterances, the speaker identification task is difficult to model as sequential prediction. The principal problem is that, unlike in many NLP problems, a general fixed tag set cannot be defined beyond the level of an individual novel. Since we aim at a system that could be applied to any novel with minimal pre-processing, sequential prediction algorithms such as Conditional Random Fields are not directly applicable. We propose a more flexible approach that assigns scores to candidate speakers for each utterance. Although the sequential information is not directly modeled with tags, our system is able to indirectly utilize the speaker alternation pattern using the method described in the following section. We implement our approach with SVMrank (Joachims, 2006). 4.3 Speaker Alternation Pattern The speaker alternation pattern is often employed by authors in dialogues between two characters. After the speakers are identified explicitly at the beginning of a dialogue, the remaining oddnumbered and even-numbered utterances are attributable to the first and second speaker, respectively. If one of the speakers “misses their turn”, a clue is provided in the text to reset the pattern. Based on the speaker alternation pattern, we make the following two observations: 1. The speakers of consecutive utterances are usually different. 2. The speaker of the n-th utterance in a dialogue is likely to be the same as the speaker of the (n −2)-th utterance. Our ranking model incorporates the speaker alternation pattern by utilizing a feature expansion scheme. For each utterance n, we first generate its own features (described in Section 5), and 1314 Features Novelty Distance to Utterance No Speaker Appearance Count No Speaker Name in Utterance No Unsupervised Actor-Topic Model Yes Vocative Speaker Name Yes Neighboring Utterances Yes Gender Matching Yes Presence Matching Yes Table 2: Principal feature sets. subsequently we add three more feature sets that represent the following neighboring utterances: n −2, n −1 and n + 1. Informally, the features of the utterances n −1 and n + 1 encode the first observation, while the features representing the utterance n −2 encode the second observation. In addition, we include a set of four binary features that are set for the utterances in the range [n−2, n+1] if the corresponding explicit speaker matches the candidate speaker of the current utterance. 5 Features In this section, we describe the set of features used in our ranking approach. The principal feature sets are listed in Table 2, together with an indication whether they are novel or have been used in previous work. 5.1 Basic Features A subset of our features correspond to the features that were proposed by EM2010. These are mostly features related to speaker names. For example, since names of speakers are often mentioned in the vicinity of their utterances, we count the number of words separating the utterance and a name mention. 
However, unlike EM2010, we consider only the two nearest characters in each direction, to reflect the observation that speakers tend to be mentioned by name immediately before or after their corresponding utterances. Another feature is used to represent the number of appearances for speaker candidates. This feature reflects the relative importance of a given character in the novel. Finally, we use a feature to indicate the presence or absence of a candidate speaker’s name within the utterance. The intuition is that speakers are unlikely to mention their own name. Feature Example start of utterance “Kitty ... before period ...Jane. between commas ..., Elizabeth, ... between comma & period ..., Mrs. Hurst. before exclamation mark ...Mrs. Bennet! before question mark ...Lizzy?... vocative phrase Dear ... after vocative phrase Oh! Lydia ... 2nd person pronoun ...you ... Table 3: Features for the vocative identification. 5.2 Vocatives We propose a novel vocative feature, which encodes the character that is explicitly addressed in an utterance. For example, consider the following utterance: “I hope Mr. Bingley will like it, Lizzy.” Intuitively, the speaker of the utterance is neither Mr. Bingley nor Lizzy; however, the speaker of the next utterance is likely to be Lizzy. We aim at capturing this intuition by identifying the addressee of the utterance. We manually annotated vocatives in about 900 utterances from the training set. About 25% of the names within utterance were tagged as vocatives. A Logistic Regression classifier (Agresti, 2006) was trained to identify the vocatives. The classifier features are shown in Table 3. The features are designed to capture punctuation context, as well as the presence of typical phrases that accompany vocatives. We also incorporate interjections like “oh!” and fixed phrases like “my dear”, which are strong indicators of vocatives. Under 10-fold cross validation, the model achieved an Fmeasure of 93.5% on the training set. We incorporate vocatives in our speaker identification system by means of three binary features that correspond to the utterances n −1, n −2, and n −3. The features are set if the detected vocative matches the candidate speaker of the current utterance n. 5.3 Matching Features We incorporate two binary features for indicating the gender and the presence of a candidate speaker. The gender matching feature encodes the gender agreement between a speaker candidate and the speaker of the current utterance. The gender information extraction is applied to two utterance 1315 groups: the anaphoric-speaker utterances, and the explicit-speaker utterances. We use the technique described in Section 4.1 to determine the gender of a speaker of the current utterance. In contrast with EM2010, this is not a hard constraint. The presence matching feature indicates whether a speaker candidate is a likely participant in a dialogue. Each dialogue consists of continuous utterance paragraphs together with neighboring narration paragraphs as defined in Section 3. The feature is set for a given character if its name or alias appears within the dialogue. 5.4 Unsupervised Actor-Topic Features The final set of features is generated by the unsupervised actor-topic model (ACTM) (Celikyilmaz et al., 2010), which requires no annotated training data. The ACTM, as shown in Figure 1, extends the work of author-topic model in (RosenZvi et al., 2010). 
It can model dialogues in a literary text, which take place between two or more speakers conversing on different topics, as distributions over topics, which are also mixtures of the term distributions associated with multiple speakers. This follows the linguistic intuition that rich contextual information can be useful in understanding dialogues. Figure 1: Graphical Representation of ACTM. The ACTM predicts the most likely speakers of a given utterance by considering the content of an utterance and its surrounding contexts. The ActorTopic-Term probabilities are calculated by using both the relationship of utterances and the surrounding textual clues. In our system, we utilize four binary features that correspond to the four top ranking positions from the ACTM model. Figure 2: Annotation Tool GUI. 6 Data Our principal data set is derived from the text of Pride and Prejudice, with chapters 19–26 as the test set, chapters 27–33 as the development set, and the remaining 46 chapters as the training set. In order to ensure high-quality speaker annotations, we developed a graphical interface (Figure 2), which displays the current utterance in context, and a list of characters in the novel. After the speaker is selected by clicking a button, the text is scrolled automatically, with the next utterance highlighted in yellow. The complete novel was annotated by a student of English literature. The annotations are publicly available1. For the purpose of a generalization experiment, we also utilize a corpus of utterances from the 19th and 20th century English novels compiled by EM2010. The corpus differs from our data set in three aspects. First, as discussed in Section 3, we treat all quoted text within a single paragraph as a single utterance, which reduces the total number of utterances, and results in a more realistic reporting of accuracy. Second, our data set includes annotations for all utterances in the novel, as opposed to only a subset of utterances from several novels, which are not necessarily contiguous. Lastly, our annotations come from a single expert, while the annotations in the EM2010 corpus were collected through Amazon’s Mechanical Turk, and filtered by voting. For example, out of 308 utterances from The Steppe, 244 are in fact annotated, which raises the question whether the discarded utterances tend to be more difficult to annotate. Table 4 shows the number of utterances in all 1www.cs.ualberta.ca/˜kondrak/austen 1316 IS AS ES Total Pride & P. (all) 663 292 305 1260 Pride & P. (test) 65 29 32 126 Emma 236 55 106 397 The Steppe 93 39 112 244 Table 4: The number of utterances in various data sets by the type (IS - Implicit Speaker; AS - Anaphoric Speaker; ES - Explicit Speaker). data sets. We selected Jane Austen’s Emma as a different novel by the same author, and Anton Chekhov’s The Steppe as a novel by a different author for our generalization experiments. Since our goal is to match utterances to characters rather than to name mentions, a preprocessing step is performed to produce a list of characters in the novel and their aliases. For example, Elizabeth Bennet may be referred to as Liz, Lizzy, Miss Lizzy, Miss Bennet, Miss Eliza, and Miss Elizabeth Bennet. We apply a name entity tagger, and then group the names into sets of character aliases, together with their gender information. The sets of aliases are typically small, except for major characters, and can be compiled with the help of web resources, such as Wikipedia, or study guides, such as CliffsNotesTM. 
This preprocessing step could also be performed automatically using a canonicalization method (Andrews et al., 2012); however, since our focus is on speaker identification, we decided to avoid introducing annotation errors at this stage. Other preprocessing steps that are required for processing a new novel include standarizing the typographical conventions, and performing POS tagging, NER tagging, and dependency parsing. We utilize the Stanford tools (Toutanova et al., 2003; Finkel et al., 2005; Marneffe et al., 2006). 7 Evaluation In this section, we describe experiments conducted to evaluate our speaker identification approach. We refer to our main model as NEIGHBORS, because it incorporates features from the neighboring utterances, as described in Section 4.3. In contrast, the INDIVIDUAL model relies only on features from the current utterance. In an attempt to reproduce the evaluation methodology of EM2010, we also test the ORACLE model, which has access to the gold-standard information about the speakers of eight neighboring utterances in the Pride & P. Emma Steppe BASELINE 42.0 44.1 66.8 INDIVIDUAL 77.8 67.3 74.2 NEIGHBORS 82.5 74.8 80.3 ORACLE 86.5 80.1 83.6 Table 5: Speaker identification accuracy (in %) on Pride & Prejudice, Emma, and The Steppe. range [n −4, n + 4]. Lastly, the BASELINE approach selects the name that is the closest in the narration, which is more accurate than the “most recent name” baseline. 7.1 Results Table 5 shows the results of the models trained on annotated utterances from Pride & Prejudice on three test sets. As expected, the accuracy of all learning models on the test set that comes from the same novel is higher than on unseen novels. However, in both cases, the drop in accuracy for the NEIGHBORS model is less than 10%. Surprisingly, the accuracy is higher on The Steppe than on Emma, even though the different writing style of Chekhov should make the task more difficult for models trained on Austen’s prose. The protagonists of The Steppe are mostly male, and the few female characters rarely speak in the novel. This renders our gender feature virtually useless, and results in lower accuracy on anaphoric speakers than on explicit speakers. On the other hand, Chekhov prefers to mention speaker names in the dialogues (46% of utterances are in the explicit-speaker category), which makes his prose slightly easier in terms of speaker identification. The relative order of the models is the same on all three test sets, with the NEIGHBORS model consistently outperforming the INDIVIDUAL model, which indicates the importance of capturing the speaker alternation pattern. The performance of the NEIGHBORS model is actually closer to the ORACLE model than to the INDIVIDUAL model. Table 6 shows the results on Emma broken down according to the type of the utterance. Unsurprisingly, the explicit speaker is the easiest category, with nearly perfect accuracy. Both the INDIVIDUAL and the NEIGHBORS models do better on anaphoric speakers than on implicit speakers, which is also expected. However, it is not the 1317 IS AS ES Total INDIVIDUAL 52.5 67.3 100.0 67.3 NEIGHBORS 63.1 76.4 100.0 74.8 ORACLE 74.2 69.1 99.1 80.1 Table 6: Speaker identification accuracy (in %) on Austen’s Emma by the type of utterance. case for the ORACLE model. We conjecture that the ORACLE model relies heavily on the neighborhood features (which are rarely wrong), and consequently tends to downplay the gender information, which is the only information extracted from the anaphora. 
In addition, anaphoric speaker is the least frequent of the three categories. Table 7 shows the results of an ablation study performed to investigate the relative importance of features. The INDIVIDUAL model serves as the base model from which we remove specific features. All tested features appear to contribute to the overall performance, with the distance features and the unsupervised actor-topic features having the most pronounced impact. We conclude that the incorporation of the neighboring features, which is responsible for the difference between the INDIVIDUAL and NEIGHBORS models, is similar in terms of importance to our strongest textual features. Feature Impact Closest Mention -6.3 Unsupervised ACTM -5.6 Name within Utterance -4.8 Vocative -2.4 Table 7: Results of feature ablation (in % accuracy) on Pride & Prejudice. 7.2 Comparison to EM2010 In this section we analyze in more detail our results on Emma and The Steppe against the published results of the state-of-the-art EM2010 system. Recall that both novels form a part of the corpus that was created by EM2010 for the development of their system. Direct comparison to EM2010 is difficult because they compute the accuracy separately for seven different categories of utterances. For each category, they experiment with all combinations of three different classifiers and four score combination methods, and report only the accuracy Character id name gender . . . 9 Mr. Collins m 10 Charlotte f 11 Jane Bennet f 12 Elizabeth Bennet f . . . Relation from to type mode . . . 10 9 husband explicit 9 10 wife derived 10 12 friend explicit 12 10 friend derived 11 12 sister explicit . . . Figure 3: Relational database with extracted social network. achieved by the best performing combination on that category. In addition, they utilize the ground truth speaker information of the preceding utterances. Therefore, their results are best compared against our ORACLE approach. Unfortunately, EM2010 do not break down their results by novel. They report the overall accuracy of 63% on both “anaphora trigram” (our anaphoric speaker), and “quote alone” (similar to our implicit speaker). If we combine the two categories, the numbers corresponding to our NEIGHBORS model are 65.6% on Emma and 64.4% on The Steppe, while ORACLE achieves 73.2% and 70.5%, respectively. Even though a direct comparison is not feasible, the numbers are remarkable considering the context of the experiment, which strongly favors the EM2010 system. 8 Extracting Family Relationships In this section, we describe an application of the speaker identification system to the extraction of family relationships. Elson et al. (2010) extract unlabeled networks where the nodes represent characters and edges indicate their proximity, as indicated by their interactions. Our goal is to construct networks in which edges are labeled by the mutual relationships between characters in a novel. We focus on family relationships, but also include social relationships, such as friend 1318 INSERT INTO Relation (id1, id2, t, m) SELECT r.to AS id1, r.from AS id2 , ’wife’ AS t, ’derived’ AS m FROM Relation r WHERE r.type=’husband’ AND r.mode=’explicit’ AND NOT EXISTS(SELECT * FROM Relation r2 WHERE r2.from=r.to AND r2.to=r.from AND r2.type=t) Figure 4: An example inference rule. and attracted-to. 
Our approach to building a social network from the novel is to build an active database of relationships explicitly mentioned in the text, which is expanded by triggering the execution of queries that deduce implicit relations. This inference process is repeated for every discovered relationship until no new knowledge can be inferred. The following example illustrates how speaker identification helps in the extraction of social relations among characters. Consider, the following conversation: “How so? how can it affect them?” “My dear Mr. Bennet,” replied his wife, “how can you be so tiresome!” If the speakers are correctly identified, the utterances are attributed to Mr. Bennet and Mrs. Bennet, respectively. Furthermore, the second utterance implies that its speaker is the wife of the preceding speaker. This is an example of an explicit relationship which is included in our database. Several similar extraction rules are used to extract explicit mentions indicating family and affective relations, including mother, nephew, and fiancee. We can also derive relationships that are not explicitly mentioned in the text; for example, that Mr. Bennet is the husband of Mrs. Bennet. Figure 3 shows a snippet of the relational database of the network extracted from Pride & Prejudice. Table Character contains all characters in the book, each with a unique identifier and gender information, while Table Relation contains all relationships that are explicitly mentioned in the text or derived through reasoning. Figure 4 shows an example of an inference rule used in our system. The rule derives a new relationship indicating that character c1 is the wife of character c2 if it is known (through an explicit mention in the text) that c2 is the husband of c1. One condition for the rule to be applied is that the database must not already contain a record indicating the wife relationship. This inference rule would derive the tuple in Figure 3 indicating that the wife or Mr. Collins is Charlotte. In our experiment with Pride & Prejudice, a total of 55 explicitly indicated relationships were automatically identified once the utterances were attributed to the characters. From those, another 57 implicit relationships were derived through inference. A preliminary manual inspection of the set of relations extracted by this method (Makazhanov et al., 2012) indicates that all of them are correct, and include about 40% all personal relations that can be inferred by a human reader from the text of the novel. 9 Conclusion and Future Work We have presented a novel approach to identifying speakers of utterances in novels. Our system incorporates a variety of novel features which utilize vocatives, unsupervised actor-topic models, and the speaker alternation pattern. The results of our evaluation experiments indicate a substantial improvement over the current state of the art. There are several interesting directions for the future work. Although the approach introduced in this paper appears to be sufficiently general to handle novels written in a different style and period, more sophisticated statistical graphical models may achieve higher accuracy on this task. A reliable automatic generation of characters and their aliases would remove the need for the preprocessing step outlined in Section 6. 
The extraction of social networks in novels that we discussed in Section 8 would benefit from the introduction of additional inference rules, and could be extended to capture more subtle notions of sentiment or relationship among characters, as well as their development over time. We have demonstrated that speaker identification can help extract family relationships, but the converse is also true. Consider the following utterance: “Lizzy,” said her father, “I have given him my consent.” 1319 In order to deduce the speaker of the utterance, we need to combine the three pieces of information: (a) the utterance is addressed to Lizzy (vocative prediction), (b) the utterance is produced by Lizzy’s father (pronoun resolution), and (c) Mr. Bennet is the father of Lizzy (relationship extraction). Similarly, in the task of compiling a list of characters, which involves resolving aliases such as Caroline, Caroline Bingley, and Miss Bingley, simultaneous extraction of family relationships would help detect the ambiguity of Miss Benett, which can refer to any of several sisters. A joint approach to resolving speaker attribution, relationship extraction, co-reference resolution, and alias-to-character mapping would not only improve the accuracy on all these tasks, but also represent a step towards deeper understanding of complex plots and stories. Acknowledgments We would like to thank Asli Celikyilmaz for collaboration in the early stages of this project, Susan Brown and Michelle Di Cintio for help with data annotation, and David Elson for the attempt to compute the accuracy of the EM2010 system on Pride & Prejudice. This research was partially supported by the Natural Sciences and Engineering Research Council of Canada. References Alan Agresti. 2006. Building and applying logistic regression models. In An Introduction to Categorical Data Analysis. John Wiley & Sons, Inc. Nicholas Andrews, Jason Eisner, and Mark Dredze. 2012. Name phylogeny: A generative model of string variation. In EMNLP-CoNLL. Steven Bethard, Hong Yu, Ashley Thornton, Vasileios Hatzivassiloglou, and Dan Jurafsky. 2004. Automatic extraction of opinion propositions and their holders. In AAAI Spring Symposium on Exploring Attitude and Affect in Text. Asli Celikyilmaz, Dilek Hakkani-Tur, Hua He, Grzegorz Kondrak, and Denilson Barbosa. 2010. The actor-topic model for extracting social networks in literary narrative. In Proceedings of the NIPS 2010 Workshop - Machine Learning for Social Computing. David K. Elson and Kathleen McKeown. 2010. Automatic attribution of quoted speech in literary narrative. In AAAI. David K. Elson, Nicholas Dames, and Kathleen McKeown. 2010. Extracting social networks from literary fiction. In ACL. Sarah Favre, Alfred Dielmann, and Alessandro Vinciarelli. 2009. Automatic role recognition in multiparty recordings using social networks and probabilistic sequential models. In ACM Multimedia. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In ACL. Kevin Glass and Shaun Bangay. 2007. A naive salience-based method for speaker identification in fiction books. In Proceedings of the 18th Annual Symposium of the Pattern Recognition. Thorsten Joachims. 2006. Training linear SVMs in linear time. In KDD. Ralf Krestel, Sabine Bergler, and Ren´e Witte. 2008. Minding the source: Automatic tagging of reported speech in newspaper articles. In LREC. Aibek Makazhanov, Denilson Barbosa, and Grzegorz Kondrak. 2012. 
Extracting family relations from literary fiction. Unpublished manuscript. Marie Catherine De Marneffe, Bill Maccartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In LREC. George A. Miller. 1995. Wordnet: A lexical database for english. Communications of the ACM, 38:39–41. Bruno Pouliquen, Ralf Steinberger, and Clive Best. 2007. Automatic detection of quotations in multilingual news. In RANLP. Michal Rosen-Zvi, Chaitanya Chemudugunta, Thomas L. Griffiths, Padhraic Smyth, and Mark Steyvers. 2010. Learning author-topic models from text corpora. ACM Trans. Inf. Syst., 28(1). Josef Ruppenhofer, Caroline Sporleder, and Fabian Shirokov. 2010. Speaker attribution in cabinet protocols. In LREC. Hugues Salamin, Alessandro Vinciarelli, Khiet Truong, and Gelareh Mohammadi. 2010. Automatic role recognition based on conversational and prosodic behaviour. In ACM Multimedia. Luis Sarmento and Sergio Nunes. 2009. Automatic extraction of quotes and topics from news feeds. In 4th Doctoral Symposium on Informatics Engineering. Ronald B. Tobias. 2012. 20 Master Plots: And How to Build Them. Writer’s Digest Books, 3rd edition. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In NAACL-HLT. 1320
2013
129
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 125–134, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Chinese Parsing Exploiting Characters Meishan Zhang†, Yue Zhang‡∗, Wanxiang Che†, Ting Liu† †Research Center for Social Computing and Information Retrieval Harbin Institute of Technology, China {mszhang, car, tliu}@ir.hit.edu.cn ‡Singapore University of Technology and Design yue [email protected] Abstract Characters play an important role in the Chinese language, yet computational processing of Chinese has been dominated by word-based approaches, with leaves in syntax trees being words. We investigate Chinese parsing from the character-level, extending the notion of phrase-structure trees by annotating internal structures of words. We demonstrate the importance of character-level information to Chinese processing by building a joint segmentation, part-of-speech (POS) tagging and phrase-structure parsing system that integrates character-structure features. Our joint system significantly outperforms a state-of-the-art word-based baseline on the standard CTB5 test, and gives the best published results for Chinese parsing. 1 Introduction Characters play an important role in the Chinese language. They act as basic phonetic, morphosyntactic and semantic units in a Chinese sentence. Frequently-occurring character sequences that express certain meanings can be treated as words, while most Chinese words have syntactic structures. For example, Figure 1(b) shows the structure of the word “建筑业(construction and building industry)”, where the characters “建(construction)” and “筑(building)” form a coordination, and modify the character “业(industry)”. However, computational processing of Chinese is typically based on words. Words are treated as the atomic units in syntactic parsing, machine translation, question answering and other NLP tasks. Manually annotated corpora, such as the Chinese Treebank (CTB) (Xue et al., 2005), usually have words as the basic syntactic elements ∗Email correspondence. 中国 建筑业 呈现 新 格局 NR NN VV JJ NN NP NP NP ADJP NP VP NP IP NP NP NP ADJP NP VP NP IP NN 局 NR-e 格 NR-b JJ 新 JJ-s VV 现 VV-e 呈 VV-b NN 业 NN-e 筑 NN-m 建 NN-b NR 国 NR-e NR-b 中 NP NP NP ADJP NP VP NP IP NN-c 局 NR-i 格 NR-b JJ-t 新 JJ-b VV-c 现 VV-i 呈 VV-b NR-r 国 NR-i NR-b 中 NR-t NN-r 业 NN-i 筑 NN-i 建 NN-b NN-c NN-t VV-t NN-t (a) CTB-style word-based syntax tree for “中国(China) 建 筑业(architecture industry) 呈现(show) 新(new) 格局 (pattern)”. 中国 建筑业 呈现 新 格局 NR NN VV JJ NN NP NP NP ADJP NP VP NP IP NP NP NP ADJP NP VP NP IP NN 局 NR-e 格 NR-b JJ 新 JJ-s VV 现 VV-e 呈 VV-b NN 业 NN-e 筑 NN-m 建 NN-b NR 国 NR-e NR-b 中 NP NP NP ADJP NP VP NP IP NN-c 局 NN-i 格 NN-b JJ-t 新 JJ-b VV-c 现 VV-i 呈 VV-b NR-r 国 NR-i NR-b 中 NR-t NN-r 业 NN-i 筑 NN-i 建 NN-b NN-c NN-t VV-t NN-t (b) character-level syntax tree with hierarchal word structures for “中(middle) 国(nation) 建(construction) 筑(building) 业(industry) 呈(present) 现(show) 新(new) 格(style) 局 (situation)”. Figure 1: Word-based and character-level phrasestructure trees for the sentence “中国建筑业呈现 新格局(China’s architecture industry shows new patterns)”, where “l”, “r”, “c” denote the directions of head characters (see section 2). (Figure 1(a)). This form of annotation does not give character-level syntactic structures for words, a source of linguistic information that is more fundamental and less sparse than atomic words. 
In this paper, we investigate Chinese syntactic parsing with character-level information by extending the notation of phrase-structure 125 (constituent) trees, adding recursive structures of characters for words. We manually annotate the structures of 37,382 words, which cover the entire CTB5. Using these annotations, we transform CTB-style constituent trees into character-level trees (Figure 1(b)). Our word structure corpus, together with a set of tools to transform CTB-style trees into character-level trees, is released at https://github.com/zhangmeishan/wordstructures. Our annotation work is in line with the work of Vadas and Curran (2007) and Li (2011), which provide extended annotations of Penn Treebank (PTB) noun phrases and CTB words (on the morphological level), respectively. We build a character-based Chinese parsing model to parse the character-level syntax trees. Given an input Chinese sentence, our parser produces its character-level syntax trees (Figure 1(b)). With richer information than word-level trees, this form of parse trees can be useful for all the aforementioned Chinese NLP applications. With regard to task of parsing itself, an important advantage of the character-level syntax trees is that they allow word segmentation, part-of-speech (POS) tagging and parsing to be performed jointly, using an efficient CKY-style or shift-reduce algorithm. Luo (2003) exploited this advantage by adding flat word structures without manually annotation to CTB trees, and building a generative character-based parser. Compared to a pipeline system, the advantages of a joint system include reduction of error propagation, and the integration of segmentation, POS tagging and syntax features. With hierarchical structures and head character information, our annotated words are more informative than flat word structures, and hence can bring further improvements to phrase-structure parsing. To analyze word structures in addition to phrase structures, our character-based parser naturally performs joint word segmentation, POS tagging and parsing jointly. Our model is based on the discriminative shift-reduce parser of Zhang and Clark (2009; 2011), which is a state-of-the-art word-based phrase-structure parser for Chinese. We extend their shift-reduce framework, adding more transition actions for word segmentation and POS tagging, and defining novel features that capture character information. Even when trained using character-level syntax trees with flat word structures, our joint parser outperforms a strong pipelined baseline that consists of a state-of-theNN-c NN-i NN-b 科 (science) 技 (technology) V VV-b 烧 (burn) NN-r NN-i NN-b 库 (repository) 存 (saving) NN-l VV-i VV-b 考 (investigate) 古 (ancient) NN-r NN-i NN-b 败 (bad) 类 (kind) A AD-b 徒 (vain) (a) subject-predicate. NN-c NN-i NN-b 科 (science) 技 (technology) VV-l VV-b 烧 (burn) NN-r NN-i NN-b 库 (repository) 存 (saving) NN-l VV-i VV-b 考 (investigate) 古 (ancient) NN-r NN-i NN-b 败 (bad) 类 (kind) AD-l AD-b 徒 (vain) (b) verb-object. NN-c NN-i NN-b 科 (science) 技 (technology) VV-b 烧 (burn) NN-r NN-i NN-b 库 (repository) 存 (saving) NN-l VV-i VV-b 考 (investigate) 古 (ancient) NN-r NN-i NN-b 败 (bad) 类 (kind) AD-b 徒 (vain) (c) coordination. NN-c NN-i NN-b 科 (science) 技 (technology) VV-l VV-b 烧 (burn) NN-r NN-i NN-b 库 (repository) 存 (saving) NN-l VV-i VV-b 考 (investigate) 古 (ancient) NN-r NN-i NN-b 败 (bad) 类 (kind) AD-l AD-b 徒 (vain) (d) modifier-noun. 
Figure 2: Inner word structures of “库存(repertory)”,“考古(archaeology)”, “科技(science and technology)” and “败类(degenerate)”. art joint segmenter and POS tagger, and our baseline word-based parser. Our word annotations lead to further improvements to the joint system, especially for phrase-structure parsing accuracy. Our parser work falls in line with recent work of joint segmentation, POS tagging and parsing (Hatori et al., 2012; Li and Zhou, 2012; Qian and Liu, 2012). Compared with related work, our model gives the best published results for joint segmentation and POS tagging, as well as joint phrase-structure parsing on standard CTB5 evaluations. With linear-time complexity, our parser is highly efficient, processing over 30 sentences per second with a beam size of 16. An open release of the parser is freely available at http://sourceforge.net/projects/zpar/, version 0.6. 2 Word Structures and Syntax Trees The Chinese language is a character-based language. Unlike alphabetical languages, Chinese characters convey meanings, and the meaning of most Chinese words takes roots in their character. For example, the word “计算机(computer)” is composed of the characters “计(count)”, “算(calculate)” and “机(machine)”. An informal name of “computer” is “电脑”, which is composed of “电 (electronic)” and “脑(brain)”. Chinese words have internal structures (Xue, 2001; Ma et al., 2012). The way characters interact within words can be similar to the way words interact within phrases. Figure 2 shows the structures of the four words “库存(repertory)”, “ 考古 126 VV-l VV-i 完 (up) AD-l AD-i 然 (so) NN-r NN-i NN-b 卧 (crouching) 虎 (tiger) NN-r NN-i NN-i 藏 (hidden) 龙 (dragon) NN-c VV-i VV-b 横 (fiercely) 扫 (sweep) VV-i VV-i 千 (thousands) 军 (troops) NN-c NN-i NN-b 教 (teach) 育 (education) NN-i 界 (field) NN NN-f NN-f 教育 (education) 界 (field) NN-c NN-i NN-b 朋 (friend) 友 (friend) NN-i 们 (plural) NN NN-f NN-f 朋友 (friend) 们 (plural) Figure 3: Character-level word structure of “卧虎 藏龙(crouching tiger hidden dragon)”. (archaeology)”, “科技(science and technology)” and “败类(degenerate)”, which demonstrate four typical syntactic structures of two-character words, including subject-predicate, verb-object, coordination and modifier-noun structures. Multicharacter words can also have recursive syntactic structures. Figure 3 illustrates the structure of the word “卧虎藏龙(crouching tiger hidden dragon)”, which is composed of two subwords “卧 虎(crouching tiger)” and “藏龙(hidden dragon)”, both having a modifier-noun structure. The meaning of characters can be a useful source of information for computational processing of Chinese, and some recent work has started to exploit this information. Zhang and Clark (2010) found that the first character in a Chinese word is a useful indicator of the word’s POS. They made use of this information to help joint word segmentation and POS tagging. Li (2011) studied the morphological structures of Chinese words, showing that 35% percent of the words in CTB5 can be treated as having morphemes. Figure 4(a) illustrates the morphological structures of the words “ 朋友们(friends)” and “教育界(educational world)”, in which the characters “们(plural)” and “界(field)” can be treated as suffix morphemes. They studied the influence of such morphology to Chinese dependency parsing (Li and Zhou, 2012). The aforementioned work explores the influence of particular types of characters to Chinese processing, yet not the full potentials of complete word structures. 
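To make the annotation scheme concrete, the following Python sketch (an illustration only; the nested-tuple encoding and helper functions are ours, not the released data format) encodes the recursive structure of "卧虎藏龙" described above, with -b/-i marking word-initial and non-initial characters and -l/-r/-c marking left-headed, right-headed and coordination combinations, and shows how the head character of a word can be read off such a structure.

    # Each node is either a leaf (tag, character) or (tag, left, right).
    # Tag suffixes: -b word-initial character, -i non-initial character,
    # -l/-r left/right-headed combination, -c coordination.
    WOHU_CANGLONG = (
        "NN-c",
        ("NN-r", ("NN-b", "卧"), ("NN-i", "虎")),   # 卧虎: modifier-noun, head 虎
        ("NN-r", ("NN-i", "藏"), ("NN-i", "龙")))   # 藏龙: modifier-noun, head 龙

    def head_char(node):
        """Follow the head directions down to a single head character."""
        if isinstance(node[1], str):          # leaf: (tag, character)
            return node[1]
        tag, left, right = node
        if tag.endswith("-r"):                # right-headed
            return head_char(right)
        return head_char(left)                # left-headed; coordination takes the left node

    def characters(node):
        """Recover the flat character sequence of the word."""
        if isinstance(node[1], str):
            return node[1]
        return characters(node[1]) + characters(node[2])

    print(characters(WOHU_CANGLONG), head_char(WOHU_CANGLONG))   # 卧虎藏龙 虎

The released annotations are of course stored in their own format; the point of the sketch is only that head characters and character-position tags are fully determined once the binarized word structure is given.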
We take one step further in this line of work, annotating the full syntactic structures of 37,382 Chinese words in the form of Figure 2 and Figure 3. Our annotation covers the entire vocabulary of CTB5. In addition to difference in coverage (100% vs 35%), our annotation is structurally more informative than that of Li (2011), as illustrated in Figure 4(b). Our annotations are binarized recursive word V l VV-i 完 (up) AD-l AD-i 然 (so) NN-r NN-i NN-b 卧 (crouching) 虎 (tiger) NN-r NN-i NN-i 藏 (hidden) 龙 (dragon) NN-c 横 (fiercely) 扫 (sweep) 千 (thousands) 军 (troops) NN-i NN-b 教 (teach) 育 (education) 界 (field) NN NN-f NN-f 教育 (education) 界 (field) NN-i NN-b 朋 (friend) 友 (friend) 们 (plural) NN NN-f NN-f 朋友 (friend) 们 (plural) (a) morphological-level word structures, where “f” denotes a special mark for fine-grained words. VV-l VV-i VV-b 烧 (burn) 完 (up) AD-l AD-i AD-b 徒 (vain) 然 (so) NN-r NN-i NN-b 卧 (crouching) 虎 (tiger) NN-r NN-i NN-i 藏 (hidden) 龙 (dragon) NN-c VV-r VV-i VV-b 横 (fiercely) 扫 (sweep) VV-r VV-i VV-i 千 (thousands) 军 (troops) VV-l NN-c NN-i NN-b 教 (teach) 育 (education) NN-i 界 (field) NN-r NN NN-f NN-f 教育 (education) 界 (field) NN-c NN-i NN-b 朋 (friend) 友 (friend) NN-i 们 (plural) NN-l NN NN-f NN-f 朋友 (friend) 们 (plural) (b) character-level word structures. Figure 4: Comparison between character-level and morphological-level word structures. structures. For each word or subword, we specify its POS and head direction. We use “l”, “r” and “c” to indicate the “left”, “right” and “coordination” head directions, respectively. The “coordination” direction is mostly used in coordination structures, while a very small number of transliteration words, such as “奥巴马(Obama)” and “洛 杉矶(Los Angeles)”, have flat structures, and we use “coordination” for their left binarization. For leaf characters, we follow previous work on word segmentation (Xue, 2003; Ng and Low, 2004), and use “b” and “i” to indicate the beginning and nonbeginning characters of a word, respectively. The vast majority of words do not have structural ambiguities. However, the structures of some words may vary according to different POS. For example, “制服” means “dominate” when it is tagged as a verb, of which the head is the left character; the same word means “uniform dress” when tagged as a noun, of which the head is the right character. Thus the input of the word structure annotation is a word together with its POS. The annotation work was conducted by three persons, with one person annotating the entire corpus, and the other two checking the annotations. Using our annotations, we can extend CTBstyle syntax trees (Figure 1(a)) into characterlevel trees (Figure 1(b)). In particular, we mark the original nodes that represent POS tags in CTBstyle trees with “-t”, and insert our word structures as unary subnodes of the “-t” nodes. For the rest of the paper, we refer to the “-t” nodes as full-word nodes, all nodes above full-word nodes as phrase 127 nodes, and all nodes below full-word nodes as subword nodes. Our character-level trees contain additional syntactic information, which are potentially useful to Chinese processing. For example, the head characters of words can be populated up to phraselevel nodes, and serve as an additional source of information that is less sparse than head words. In this paper, we build a parser that yields characterlevel trees from raw character sequences. 
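Since the annotation is keyed by a word together with its POS, the conversion of a CTB-style tree into a character-level tree can be pictured as a lookup that replaces each POS-over-word leaf by a "-t" node dominating the annotated word structure. The sketch below is illustrative only (the lexicon contents and function names are ours), using the "制服" example above, whose head direction differs between its verb and noun readings.

    # Hypothetical fragment of a word-structure lexicon, keyed by (word, POS).
    # Values reuse the nested-tuple encoding from the previous sketch.
    WORD_STRUCTURES = {
        ("制服", "VV"): ("VV-l", ("VV-b", "制"), ("VV-i", "服")),  # "dominate": head on the left
        ("制服", "NN"): ("NN-r", ("NN-b", "制"), ("NN-i", "服")),  # "uniform dress": head on the right
    }

    def to_character_level(pos, word):
        """Replace a POS-over-word leaf by a full-word ("-t") node
        dominating its annotated internal structure."""
        structure = WORD_STRUCTURES[(word, pos)]
        return (pos + "-t", structure)

    print(to_character_level("NN", "制服"))
    # ('NN-t', ('NN-r', ('NN-b', '制'), ('NN-i', '服')))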
In addition, we use this parser to study the effects of our annotations to character-based statistical Chinese parsing, showing that they are useful in improving parsing accuracies. 3 Character-based Chinese Parsing To produce character-level trees for Chinese NLP tasks, we develop a character-based parsing model, which can jointly perform word segmentation, POS tagging and phrase-structure parsing. To our knowledge, this is the first work to develop a transition-based system that jointly performs the above three tasks. Trained using annotated word structures, our parser also analyzes the internal structures of Chinese words. Our character-based Chinese parsing model is based on the work of Zhang and Clark (2009), which is a transition-based model for lexicalized constituent parsing. They use a beam-search decoder so that the transition action sequence can be globally optimized. The averaged perceptron with early-update (Collins and Roark, 2004) is used to train the model parameters. Their transition system contains four kinds of actions: (1) SHIFT, (2) REDUCE-UNARY, (3) REDUCE-BINARY and (4) TERMINATE. The system can provide binarzied CFG trees in Chomsky Norm Form, and they present a reversible conversion procedure to map arbitrary CFG trees into binarized trees. In this work, we remain consistent with their work, using the head-finding rules of Zhang and Clark (2008), and the same binarization algorithm.1 We apply the same beam-search algorithm for decoding, and employ the averaged perceptron with early-update to train our model. We make two extensions to their work to enable joint segmentation, POS tagging and phrasestructure parsing from the character level. First, we modify the actions of the transition system for 1We use a left-binarization process for flat word structures that contain more than two characters. S2 stack ... ... queue Q0 Q1 ... S1 S1l S1r ... ... S0 S0l S0r ... ... Figure 5: A state in a transition-based model. parsing the inner structures of words. Second, we extend the feature set for our parsing problem. 3.1 The Transition System In a transition-based system, an input sentence is processed in a linear left-to-right pass, and the output is constructed by a state-transition process. We learn a model for scoring the transition Ai from one state STi to the next STi+1. As shown in Figure 5, a state ST consists of a stack S and a queue Q, where S = (· · · , S1, S0) contains partially constructed parse trees, and Q = (Q0, Q1, · · · , Qn−j) = (cj, cj+1, · · · , cn) is the sequence of input characters that have not been processed. The candidate transition action A at each step is defined as follows: • SHIFT-SEPARATE(t): remove the head character cj from Q, pushing a subword node S′ cj 2 onto S, assigning S′.t = t. Note that the parse tree S0 must correspond to a full-word or a phrase node, and the character cj is the first character of the next word. The argument t denotes the POS of S′. • SHIFT-APPEND: remove the head character cj from Q, pushing a subword node S′ cj onto S. cj will eventually be combined with all the subword nodes on top of S to form a word, and thus we must have S′.t = S0.t. • REDUCE-SUBWORD(d): pop the top two nodes S0 and S1 off S, pushing a new subword node S′ S1 S0 onto S. The argument d denotes the head direction of S′, of which the value can be “left”, “right” or “coordination”.3 Both S0 and S1 must be subword nodes and S′.t = S0.t = S1.t. 
2We use this notation for a compact representation of a tree node, where the numerator represents a father node, and the denominator represents the children. 3For the head direction “coordination”, we extract the head character from the left node. 128 Category Feature templates When to Apply Structure S0ntl S0nwl S1ntl S1nwl S2ntl S2nwl S3ntl S3nwl, All features Q0c Q1c Q2c Q3c Q0c · Q1c Q1c · Q2c Q2c · Q3c, S0ltwl S0rtwl S0utwl S1ltwl S1rtwl S1utwl, S0nw · S1nw S0nw · S1nl S0nl · S1nw S0nl · S1nl, S0nw · Q0c S0nl · Q0c S1nw · Q0c S1nlQ0c, S0nl · S1nl · S2nl S0nw · S1nl · S2nl S0nl · S1nw · S2nl S0nl · S1nl · S2nw, S0nw · S1nl · Q0c S0nl · S1nw · Q0c S0nl · S1nl · Q0c, S0ncl S0nct S0nctl S1ncl S1nct S1nctl, S2ncl S2nct S2nctl S3ncl S3nct S3nctl, S0nc · S1nc S0ncl · S1nl S0nl · S1ncl S0ncl · S1ncl, S0nc · Q0c S0nl · Q0c S1nc · Q0c S1nl · Q0c, S0nc · S1nc · Q0c S0nc · S1nc · Q0c · Q1c start(S0w) · start(S1w) start(S0w) · end(S1w), REDUCE-SUBWORD indict(S1wS0w) · len(S1wS0w) indict(S1wS0w, S0t) · len(S1wS0w) String t−1 · t0 t−2 · t−1t0 w−1 · t0 c0 · t0 start(w−1) · t0 c−1 · c0 · t−1 · t0, SHIFT-SEPARATE features w−1 w−2 · w−1 w−1, where len(w−1) = 1 end(w−1) · c0, REDUCE-WORD start(w−1) · len(w−1) end(w−1) · len(w−1) start(w−1) · end(w−1), w−1 · c0 end(w−2) · w−1 start(w−1) · c0 end(w−2) · end(w−1), w−1 · len(w−2) w−2 · len(w−1) w−1 · t−1 w−1 · t−2 w−1 · t−1 · c0, w−1 · t−1 · end(w−2) c−2 · c−1 · c0 · t−1, where len(w−1) = 1 end(w−1) · t−1, c · t−1 · end(w−1), where c ∈w−1 and c ̸= end(w−1) c0 · t−1 c−1 · c0 start(w−1) · c0t−1 c−1 · c0 · t−1 SHIFT-APPEND Table 1: Feature templates for the character-level parser. The function start(·), end(·) and len(·) denote the first character, the last character and the length of a word, respectively. • REDUCE-WORD: pop the top node S0 off S, pushing a full-word node S′ S0 onto S. This reduce action generates a full-word node from S0, which must be a subword node. • REDUCE-BINARY(d, l): pop the top two nodes S0 and S1 off S, pushing a binary phrase node S′ S1 S0 onto S. The argument l denotes the constituent label of S′, and the argument d specifies the lexical head direction of S′, which can be either “left” or “right”. Both S0 and S1 must be a full-word node or a phrase node. • REDUCE-UNARY(l): pop the top node S0 off S, pushing a unary phrase node S′ S0 onto S. l denotes the constituent label of S′. • TERMINATE: mark parsing complete. Compared to set of actions in our baseline transition-based phrase-structure parser, we have made three major changes. First, we split the original SHIFT action into SHIFT-SEPARATE(t) and SHIFT-APPEND, which jointly perform the word segmentation and POS tagging tasks. Second, we add an extra REDUCE-SUBWORD(d) operation, which is used for parsing the inner structures of words. Third, we add REDUCE-WORD, which applies a unary rule to mark a completed subword node as a full-word node. The new node corresponds to a unary “-t” node in Figure 1(b). 3.2 Features Table 1 shows the feature templates of our model. The feature set consists of two categories: (1) structure features, which encode the structural information of subwords, full-words and phrases. (2) string features, which encode the information of neighboring characters and words. 
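Before turning to the concrete feature templates, the transition actions defined above can be summarized in code. The following sketch is ours and deliberately minimal: it omits the scoring model, beam search and most preconditions, and the class and method names are not from the released parser.

    from collections import namedtuple

    # A state is a stack of partial trees plus a queue of unread characters.
    State = namedtuple("State", ["stack", "queue"])

    # Candidate transition actions of the joint parser (arguments in brackets):
    ACTIONS = [
        "SHIFT-SEPARATE(t)",    # start a new word with POS t from the next character
        "SHIFT-APPEND",         # append the next character to the word being built
        "REDUCE-SUBWORD(d)",    # combine two subword nodes, head direction d in {l, r, c}
        "REDUCE-WORD",          # close a subword node as a full-word ("-t") node
        "REDUCE-BINARY(d, l)",  # build a binary phrase node with label l, head direction d
        "REDUCE-UNARY(l)",      # build a unary phrase node with label l
        "TERMINATE",            # mark parsing complete
    ]

    def shift_separate(state, pos):
        """Start a new word: move one character from the queue onto the stack."""
        char, rest = state.queue[0], state.queue[1:]
        return State(state.stack + [(pos + "-b", char)], rest)

    def reduce_subword(state, direction):
        """Combine the two topmost subword nodes under their shared POS."""
        s1, s0 = state.stack[-2], state.stack[-1]
        pos = s0[0].split("-")[0]                  # both nodes must share this POS
        node = (pos + "-" + direction, s1, s0)
        return State(state.stack[:-2] + [node], state.queue)

    s = State(stack=[], queue=list("制服呈现"))
    s = shift_separate(s, "NN")                    # pushes (NN-b, 制)
    print(s.stack, "".join(s.queue))               # remaining queue: 服呈现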
For the structure features, the symbols S0, S1, S2, S3 represent the top four nodes on the stack; Q0, Q1, Q2, Q3 denote the first four characters in the queue; S0l, S0r, S0u represent the left, right child for a binary branching S0, and the single child for a unary branching S0, respectively; S1l, S1r, S1u represent the left, right child for a binary branching S1, and the single child for a unary branching S1, respectively; n represents the type for a node; it is a binary value that indicates whether the node is a subword node; c, w, t and l represent the head character, word (or subword), POS tag and constituent label of a node, respectively. The structure features are mostly taken 129 from the work of Zhang and Clark (2009). The feature templates in bold are novel, are designed to encode head character information. In particular, the indict function denotes whether a word is in a tag dictionary, which is collected by extracting all multi-character subwords that occur more than five times in the training corpus. For string features, c0, c−1 and c−2 represent the current character and its previous two characters, respectively; w−1 and w−2 represent the previous two words to the current character, respectively; t0, t−1 and t−2 represent the POS tags of the current word and the previous two words, respectively. The string features are used for word segmentation and POS tagging, and are adapted from a state-of-the-art joint segmentation and tagging model (Zhang and Clark, 2010). In summary, our character-based parser contains the word-based features of constituent parser presented in Zhang and Clark (2009), the wordbased and shallow character-based features of joint word segmentation and POS tagging presented in Zhang and Clark (2010), and additionally the deep character-based features that encode word structure information, which are the first presented by this paper. 4 Experiments 4.1 Setting We conduct our experiments on the CTB5 corpus, using the standard split of data, with sections 1–270,400–931 and 1001–1151 for training, sections 301–325 for system development, and sections 271–300 for testing. We apply the same preprocessing step as Harper and Huang (2011), so that the non-terminal yield unary chains are collapsed to single unary rules. Since our model can jointly process word segmentation, POS tagging and phrase-structure parsing, we evaluate our model for the three tasks, respectively. For word segmentation and POS tagging, standard metrics of word precision, recall and F-score are used, where the tagging accuracy is the joint accuracy of word segmentation and POS tagging. For phrase-structure parsing, we use the standard parseval evaluation metrics on bracketing precision, recall and F-score. As our constituent trees are based on characters, we follow previous work and redefine the boundary of a constituent span by its start and end characters. In addition, we evaluate the performance of word 65 70 75 80 85 90 95 0 10 20 30 40 64b 16b 4b 1b (a) Joint segmentation and POS tagging F-scores. 30 40 50 60 70 80 90 0 10 20 30 40 64b 16b 4b 1b (b) Joint constituent parsing F-scores. Figure 6: Accuracies against the training epoch for joint segmentation and tagging as well as joint phrase-structure parsing using beam sizes 1, 4, 16 and 64, respectively. structures, using the word precision, recall and Fscore metrics. A word structure is correct only if the word and its internal structure are both correct. 
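A minimal sketch of the character-based parseval evaluation described above: each labeled constituent is reduced to a (label, start-character, end-character) span, and precision, recall and F-score are computed over these spans. This is our own illustration under simplifying assumptions; in particular, how unary chains and full-word nodes are counted is not spelled out here.

    def bracketing_prf(gold_spans, pred_spans):
        """gold_spans, pred_spans: multisets of (label, start_char, end_char)."""
        from collections import Counter
        gold, pred = Counter(gold_spans), Counter(pred_spans)
        matched = sum((gold & pred).values())
        p = matched / sum(pred.values()) if pred else 0.0
        r = matched / sum(gold.values()) if gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    # Toy example with character offsets rather than word offsets.
    gold = [("NP", 0, 5), ("VP", 5, 10), ("IP", 0, 10)]
    pred = [("NP", 0, 5), ("VP", 6, 10), ("IP", 0, 10)]
    print(bracketing_prf(gold, pred))   # roughly (0.667, 0.667, 0.667)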
4.2 Development Results Figure 6 shows the accuracies of our model using different beam sizes with respect to the training epoch. The performance of our model increases as the beam size increases. The amount of increases becomes smaller as the size of the beam grows larger. Tested using gcc 4.7.2 and Fedora 17 on an Intel Core i5-3470 CPU (3.20GHz), the decoding speeds are 318.2, 98.0, 30.3 and 7.9 sentences per second with beam size 1, 4, 16 and 64, respectively. Based on this experiment, we set the beam size 64 for the rest of our experiments. The character-level parsing model has the advantage that deep character information can be extracted as features for parsing. For example, the head character of a word is exploited in our model. We conduct feature ablation experiments to evaluate the effectiveness of these features. We find that the parsing accuracy decreases about 0.6% when the head character related features (the bold feature templates in Table 1) are removed, which demonstrates the usefulness of these features. 4.3 Final Results In this section, we present the final results of our model, and compare it to two baseline systems, a pipelined system and a joint system that is trained with automatically generated flat words structures. The baseline pipelined system consists of the joint segmentation and tagging model proposed by 130 Task P R F Pipeline Seg 97.35 98.02 97.69 Tag 93.51 94.15 93.83 Parse 81.58 82.95 82.26 Flat word Seg 97.32 98.13 97.73 structures Tag 94.09 94.88 94.48 Parse 83.39 83.84 83.61 Annotated Seg 97.49 98.18 97.84 word structures Tag 94.46 95.14 94.80 Parse 84.42 84.43 84.43 WS 94.02 94.69 94.35 Table 2: Final results on test corpus. Zhang and Clark (2010), and the phrase-structure parsing model of Zhang and Clark (2009). Both models give state-of-the-art performances, and are freely available.4 The model for joint segmentation and POS tagging is trained with a 16beam, since it achieves the best performance. The phrase-structure parsing model is trained with a 64-beam. We train the parsing model using the automatically generated POS tags by 10-way jackknifing, which gives about 1.5% increases in parsing accuracy when tested on automatic segmented and POS tagged inputs. The joint system trained with flat word structures serves to test the effectiveness of our joint parsing system over the pipelined baseline, since flat word structures do not contain additional sources of information over the baseline. It is also used to test the usefulness of our annotation in improving parsing accuracy. Table 2 shows the final results of our model and the two baseline systems on the test data. We can see that both character-level joint models outperform the pipelined system; our model with annotated word structures gives an improvement of 0.97% in tagging accuracy and 2.17% in phrase-structure parsing accuracy. The results also demonstrate that the annotated word structures are highly effective for syntactic parsing, giving an absolute improvement of 0.82% in phrase-structure parsing accuracy over the joint model with flat word structures. Row “WS” in Table 2 shows the accuracy of hierarchical word-structure recovery of our joint system. This figure can be useful for high-level applications that make use of character-level trees by 4http://sourceforge.net/projects/zpar/, version 0.5. our parser, as it reflects the capability of our parser in analyzing word structures. In particular, the performance of parsing OOV word structure is an important metric of our parser. 
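One plausible way to compute such an OOV word-structure recall is sketched below; this is our own reading of the metric, not the paper's evaluation script. An out-of-vocabulary word counts as recalled only if its span, POS and internal structure all match the gold tree, and a more lenient variant restricts the denominator to OOV words that were correctly segmented and tagged.

    def oov_structure_recall(gold_words, pred_words, training_vocab,
                             only_correctly_seg_tagged=False):
        """gold_words / pred_words: dicts mapping (word, start_char) to
        (POS, internal_structure); training_vocab: set of words seen in training."""
        hits, total = 0, 0
        for (word, start), (gold_pos, gold_struct) in gold_words.items():
            if word in training_vocab:
                continue                      # only out-of-vocabulary words count
            pred = pred_words.get((word, start))
            if only_correctly_seg_tagged and (pred is None or pred[0] != gold_pos):
                continue                      # lenient variant: ignore seg/tag errors
            total += 1
            if pred == (gold_pos, gold_struct):
                hits += 1
        return hits / total if total else 0.0

The strict and lenient variants correspond to the two figures reported next.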
The recall of OOV word structures is 60.43%, while if we do not consider the influences of segmentation and tagging errors, counting only the correctly segmented and tagged words, the recall is 87.96%. 4.4 Comparison with Previous Work In this section, we compare our model to previous systems on the performance of joint word segmentation and POS tagging, and the performance of joint phrase-structure parsing. Table 3 shows the results. Kruengkrai+ ’09 denotes the results of Kruengkrai et al. (2009), which is a lattice-based joint word segmentation and POS tagging model; Sun ’11 denotes a subword based stacking model for joint segmentation and POS tagging (Sun, 2011), which uses a dictionary of idioms; Wang+ ’11 denotes a semisupervised model proposed by Wang et al. (2011), which additionally uses the Chinese Gigaword Corpus; Li ’11 denotes a generative model that can perform word segmentation, POS tagging and phrase-structure parsing jointly (Li, 2011); Li+ ’12 denotes a unified dependency parsing model that can perform joint word segmentation, POS tagging and dependency parsing (Li and Zhou, 2012); Li ’11 and Li+ ’12 exploited annotated morphological-level word structures for Chinese; Hatori+ ’12 denotes an incremental joint model for word segmentation, POS tagging and dependency parsing (Hatori et al., 2012); they use external dictionary resources including HowNet Word List and page names from the Chinese Wikipedia; Qian+ ’12 denotes a joint segmentation, POS tagging and parsing system using a unified framework for decoding, incorporating a word segmentation model, a POS tagging model and a phrasestructure parsing model together (Qian and Liu, 2012); their word segmentation model is a combination of character-based model and word-based model. Our model achieved the best performance on both joint segmentation and tagging as well as joint phrase-structure parsing. Our final performance on constituent parsing is by far the best that we are aware of for the Chinese data, and even better than some state-of-the-art models with gold segmentation. For example, the un-lexicalized PCFG model of Petrov and Klein 131 System Seg Tag Parse Kruengkrai+ ’09 97.87 93.67 – Sun ’11 98.17* 94.02* – Wang+ ’11 98.11* 94.18* – Li ’11 97.3 93.5 79.7 Li+ ’12 97.50 93.31 – Hatori+ ’12 98.26* 94.64* – Qian+ ’12 97.96 93.81 82.85 Ours pipeline 97.69 93.83 82.26 Ours joint flat 97.73 94.48 83.61 Ours joint annotated 97.84 94.80 84.43 Table 3: Comparisons of our final model with state-of-the-art systems, where “*” denotes that external dictionary or corpus has been used. (2007) achieves 83.45%5 in parsing accuracy on the test corpus, and our pipeline constituent parsing model achieves 83.55% with gold segmentation. They are lower than the performance of our character-level model, which is 84.43% without gold segmentation. The main differences between word-based and character-level parsing models are that character-level model can exploit character features. This further demonstrates the effectiveness of characters in Chinese parsing. 5 Related Work Recent work on using the internal structure of words to help Chinese processing gives important motivations to our work. Zhao (2009) studied character-level dependencies for Chinese word segmentation by formalizing segmentsion task in a dependency parsing framework. Their results demonstrate that annotated word dependencies can be helpful for word segmentation. Li (2011) pointed out that the word’s internal structure is very important for Chinese NLP. 
They annotated morphological-level word structures, and a unified generative model was proposed to parse the Chinese morphological and phrase-structures. Li and Zhou (2012) also exploited the morphologicallevel word structures for Chinese dependency parsing. They proposed a unified transition-based model to parse the morphological and dependency structures of a Chinese sentence in a unified framework. The morphological-level word struc5We rerun the parser and evaluate it using the publiclyavailable code on http://code.google.com/p/berkeleyparser by ourselves, since we have a preprocessing step for the CTB5 corpus. tures concern only prefixes and suffixes, which cover only 35% of entire words in CTB. According to their results, the final performances of their model on word segmentation and POS tagging are below the state-of-the-art joint segmentation and POS tagging models. Compared to their work, we consider the character-level word structures for Chinese parsing, presenting a unified framework for segmentation, POS tagging and phrasestructure parsing. We can achieve improved segmentation and tagging performance. Our character-level parsing model is inspired by the work of Zhang and Clark (2009), which is a transition-based model with a beam-search decoder for word-based constituent parsing. Our work is based on the shift-reduce operations of their work, while we introduce additional operations for segmentation and POS tagging. By such an extension, our model can include all the features in their work, together with the features for segmentation and POS tagging. In addition, we propose novel features related to word structures and interactions between word segmentation, POS tagging and word-based constituent parsing. Luo (2003) was the first work to introduce the character-based syntax parsing. They use it as a joint framework to perform Chinese word segmentation, POS tagging and syntax parsing. They exploit a generative maximum entropy model for character-based constituent parsing, and find that POS information is very useful for Chinese word segmentation, but high-level syntactic information seems to have little effect on segmentation. Compared to their work, we use a transition-based discriminative model, which can benefit from large amounts of flexible features. In addition, instead of using flat structures, we manually annotate hierarchal tree structures of Chinese words for converting word-based constituent trees into character-based constituent trees. Hatori et al. (2012) proposed the first joint work for the word segmentation, POS tagging and dependency parsing. They used a single transitionbased model to perform the three tasks. Their work demonstrates that a joint model can improve the performance of the three tasks, particularly for POS tagging and dependency parsing. Qian and Liu (2012) proposed a joint decoder for word segmentation, POS tagging and word-based constituent parsing, although they trained models for the three tasks separately. They reported better 132 performances when using a joint decoder. In our work, we employ a single character-based discriminative model to perform segmentation, POS tagging and phrase-structure parsing jointly, and study the influence of annotated word structures. 6 Conclusions and Future Work We studied the internal structures of more than 37,382 Chinese words, analyzing their structures as the recursive combinations of characters. 
Using these word structures, we extended the CTB into character-level trees, and developed a characterbased parser that builds such trees from raw character sequences. Our character-based parser performs segmentation, POS tagging and parsing simultaneously, and significantly outperforms a pipelined baseline. We make both our annotations and our parser available online. In summary, our contributions include: • We annotated the internal structures of Chinese words, which are potentially useful to character-based studies of Chinese NLP. We extend CTB-style constituent trees into character-level trees using our annotations. • We developed a character-based parsing model that can produce our character-level constituent trees. Our parser jointly performs word segmentation, POS tagging and syntactic parsing. • We investigated the effectiveness of our joint parser over pipelined baseline, and the effectiveness of our annotated word structures in improving parsing accuracies. Future work includes investigations of our parser and annotations on Chinese NLP tasks. Acknowledgments This work was supported by National Natural Science Foundation of China (NSFC) via grant 61133012, the National “863” Major Projects via grant 2011AA01A207, the National “863” Leading Technology Research Project via grant 2012AA011102, and SRG ISTD 2012 038 from Singapore University of Technology and Design. References Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 111–118, Barcelona, Spain, July. Mary Harper and Zhongqiang Huang. 2011. Chinese statistical parsing. Handbook of Natural Language Processing and Machine Translation. Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2012. Incremental joint approach to word segmentation, pos tagging, and dependency parsing in chinese. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1045– 1053, Jeju Island, Korea, July. Association for Computational Linguistics. Canasai Kruengkrai, Kiyotaka Uchimoto, Jun’ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An error-driven word-character hybrid model for joint chinese word segmentation and pos tagging. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 513–521, Suntec, Singapore, August. Association for Computational Linguistics. Zhongguo Li and Guodong Zhou. 2012. Unified dependency parsing of chinese morphological and syntactic structures. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1445–1454, Jeju Island, Korea, July. Association for Computational Linguistics. Zhongguo Li. 2011. Parsing the internal structure of words: A new paradigm for chinese word segmentation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1405–1414, Portland, Oregon, USA, June. Association for Computational Linguistics. Xiaoqiang Luo. 2003. A maximum entropy Chinese character-based parser. In Michael Collins and Mark Steedman, editors, Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 192–199. Jianqiang Ma, Chunyu Kit, and Dale Gerdemann. 2012. 
Semi-automatic annotation of chinese word structure. In Proceedings of the Second CIPSSIGHAN Joint Conference on Chinese Language Processing, pages 9–17, Tianjin, China, December. Association for Computational Linguistics. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese partof-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 277–284, Barcelona, Spain, July. Association for Computational Linguistics. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language 133 Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404–411, Rochester, New York, April. Association for Computational Linguistics. Xian Qian and Yang Liu. 2012. Joint chinese word segmentation, pos tagging and parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 501–511, Jeju Island, Korea, July. Association for Computational Linguistics. Weiwei Sun. 2011. A stacked sub-word model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1385– 1394, Portland, Oregon, USA, June. Association for Computational Linguistics. David Vadas and James Curran. 2007. Adding noun phrase structure to the penn treebank. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 240–247, Prague, Czech Republic, June. Association for Computational Linguistics. Yiou Wang, Jun’ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 309–317, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Nianwen Xue. 2001. Defining and Automatically Identifying Words in Chinese. Ph.D. thesis, University of Delaware. Nianwen Xue. 2003. Chinese word segmentation as character tagging. International Journal of Computational Linguistics and Chinese Language Processing, 8(1). Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 562– 571, Honolulu, Hawaii, October. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2009. Transitionbased parsing of the chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09), pages 162–171, Paris, France, October. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2010. A fast decoder for joint word segmentation and POS-tagging using a single discriminative model. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 843–852, Cambridge, MA, October. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2011. 
Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105–151. Hai Zhao. 2009. Character-level dependencies in chinese: Usefulness and learning. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 879–887, Athens, Greece, March. Association for Computational Linguistics. 134
2013
13
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1321–1330, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Language Acquisition and Probabilistic Models: keeping it simple Aline Villavicencio♣, Marco Idiart♥Robert Berwick♦, Igor Malioutov♠ ♣Institute of Informatics, Federal University of Rio Grande do Sul (Brazil) ♥Institute of Physics, Federal University of Rio Grande do Sul (Brazil) ♦LIDS, Dept. of EECS, Massachusetts Institute of Technology (USA) ♠CSAIL, Dept. of EECS, Massachusetts Institute of Technology (USA) [email protected], [email protected] [email protected], [email protected] Abstract Hierarchical Bayesian Models (HBMs) have been used with some success to capture empirically observed patterns of under- and overgeneralization in child language acquisition. However, as is well known, HBMs are “ideal” learning systems, assuming access to unlimited computational resources that may not be available to child language learners. Consequently, it remains crucial to carefully assess the use of HBMs along with alternative, possibly simpler, candidate models. This paper presents such an evaluation for a language acquisition domain where explicit HBMs have been proposed: the acquisition of English dative constructions. In particular, we present a detailed, empiricallygrounded model-selection comparison of HBMs vs. a simpler alternative based on clustering along with maximum likelihood estimation that we call linear competition learning (LCL). Our results demonstrate that LCL can match HBM model performance without incurring on the high computational costs associated with HBMs. 1 Introduction In recent years, with advances in probability and estimation theory, there has been much interest in Bayesian models (BMs) (Chater, Tenenbaum, and Yuille, 2006; Jones and Love, 2011) and their application to child language acquisition with its challenging combination of structured information and incomplete knowledge, (Perfors, Tenenbaum, and Wonnacott, 2010; Hsu and Chater, 2010; Parisien, Fazly, and Stevenson, 2008; Parisien and Stevenson, 2010) as they offer several advantages in this domain. They can readily handle the evident noise and ambiguity of acquisition input, while at the same time providing efficiency via priors that mirror known pre-existing language biases. Further, hierarchical Bayesian Models (HBMs) can combine distinct abstraction levels of linguistic knowledge, from variation at the level of individual lexical items, to cross-item variation, using hyper-parameters to capture observed patterns of both under- and over-generalization as in the acquisition of e.g. dative alternations in English (Hsu and Chater, 2010; Perfors, Tenenbaum, and Wonnacott, 2010), and verb frames in a controlled artificial language (Wonnacott, Newport, and Tanenhaus, 2008). HBMs can thus be viewed as providing a “rational” upper bound on language learnability, yielding optimal models that account for observed data while minimizing any required prior information. In addition, the clustering implicit in HBM modeling introduces additional parameters that can be tuned to specific data patterns. However, this comes at a well-known price: HBMs generally are also ideal learning systems, known to be computationally infeasible (Kwisthout, Wareham, and van Rooij, 2011). 
Approximations proposed to ensure computational tractability, like reducing the number of classes that need to be learned may also be linguistically and cognitively implausible. For instance, in terms of verb learning, this could 1321 take the form of reducing the number of subcategorization frames to the relevant subset, as in (Perfors, Tenenbaum, and Wonnacott, 2010), where only 2 frames are considered for ‘take’, when in fact it is listed in 6 frames by Levin (1993). Finally, comparison of various Bayesian models of the same task is rare (Jones and Love, 2011) and Bayesian inference generally can be demonstrated as simply one class of regularization or smoothing techniques among many others; given the problem at hand, there may well be other, equally compelling regularization methods for dealing with the bias-variance dilemma (e.g., SVMs (Shalizi, 2009)). Consequently, the relevance of HBMs for cognitively accurate accounts of human learning remains uncertain and needs to be carefully assessed. Here we argue that the strengths of HBMs for a given task must be evaluated in light of their computational and cognitive costs, and compared to other viable alternatives. The focus should be on finding the simplest statistical models consistent with a given behavior, particularly one that aligns with known cognitive limitations. In the case of many language acquisition tasks this behavior often takes the form of overgeneralization, but with eventual convergence to some target language given exposure to more data. In particular, in this paper we consider how children acquire English dative verb constructions, comparing HBMs to a simpler alternative, a linear competition learning (LCL) algorithm that models the behavior of a given verb as the linear competition between the evidence for that verb, and the average behavior of verbs belonging to its same class. The results show that combining simple clustering methods along with ordinary maximum likelihood estimation yields a result comparable to HBM performance, providing an alternative account of the same facts, without the computational costs incurred by HBM models that must rely, for example, on Markov Chain Monte Carlo (MCMC) methods for numerically integrating complex likelihood integrals, or on Chinese Restaurant Process (CRP) for producing partitions. In terms of Marr’s hierarchy (Marr, 1982) learning verb alternations is an abstract computational problem (Marr’s type I), solvable by many type II methods combining representations (models, viz. HBMs or LCLs) with particular algorithms. The HBM convention of adopting ideal learning amounts to invoking unbounded algorithmic resources, solvability in principle, even though in practice such methods, even approximate ones, are provably NP-hard (cf. (Kwisthout, Wareham, and van Rooij, 2011)). Assuming cognitive plausibility as a desideratum, we therefore examine whether HBMs can also be approximated by another type II method (LCLs) that does not demand such intensive computation. Any algorithm that approximates an HBM can be viewed as implementing a somewhat different underlying model; if it replicates HBM prediction performance but is simpler and less computationally complex then we assume it is preferable. This paper is organized as follows: we start with a discussion of formalizations of language acquisition tasks, §2. 
We present our experimental framework for the dative acquisition task, formalizing a range of learning models from simple MLE methods to HBM techniques, §3, and a computational evaluation of each model, §4. We finish with conclusions and possibilities for future work, §5. 2 Evidence in Language Acquisition A familiar problem for language acquisition is how children learn which verbs participate in so-called dative alternations, exemplified by the child-produced sentences 1 to 3, from the Brown (1973) corpus in CHILDES (MacWhinney, 1995). 1. you took me three scrambled eggs (a direct object dative (DOD) from Adam at age 3;6) 2. Mommy can you fix dis for me ? (a prepositional dative (PD) from Adam at age 4;7) 3. *Mommy, fix me my tiger (from Adam at age 5;2) Examples like these show that children generalize their use of verbs. For example, in sentence (1), the child Adam uses take as a DOD before any recorded occurrence of a similar use of take in adult speech to Adam. Such verbs alternate because they can also occur with a prepositional form, as in sentence (2). However, sometimes a child’s use of verbs like 1322 these amounts to an overgeneralization – that is, their productive use of a verb in a pattern that does not occur in the adult grammar, as in sentence (3), above. Faced with these two verb frames the task for the learner is to decide for a particular verb if it is a non-alternating DOD only verb, a PD only verb, or an alternating verb that allows both forms. This ambiguity raises an important learnability question, conventionally known as Baker’s paradox (Baker, 1979). On the assumption that children only receive positive examples of verb forms, then it is not clear how they might recover from the overgeneralization exhibited in sentence (3) above, because they will never receive positive sentences from adults like (3), using fix in a DOD form. As has long been noted, if negative examples were systematically available to learners, then this problem would be solved, since the child would be given evidence that the DOD form is not possible in the adult grammar. However, although parental correction could be considered to be a source of negative evidence, it is neither systematic nor generally available to all children (Marcus, 1993). Even when it does occur, all careful studies have indicated that it seems mostly concerned with semantic appropriateness rather than syntax. In the cases where it is related to syntax, it is often difficult to determine what the correction refers to in the utterance and besides children seem to be oblivious to the correction (Brown and Hanlon, 1970; Ingram, 1989). One alternative solution to Baker’s paradox that has been widely discussed at least since Chomsky (1981) is the use of indirect negative evidence. On the indirect negative evidence model, if a verb is not found where it would be expected to occur, the learner may conclude it is not part of the adult grammar. Crucially, the indirect evidence model is inherently statistical. Different formalizations of indirect negative evidence have been incorporated in several computational learning models for learning e.g. grammars (Briscoe, 1997; Villavicencio, 2002; Kwiatkowski et al., 2010); dative verbs (Perfors, Tenenbaum, and Wonnacott, 2010; Hsu and Chater, 2010); and multiword verbs (Nematzadeh, Fazly, and Stevenson, 2013). Since a number of closely related models can all implement the indirect negative evidence approach, the decision of which one to choose for a given task may not be entirely clear. 
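The statistical character of indirect negative evidence can be made concrete with a small calculation; this is a schematic illustration of the idea with made-up numbers, not a model from the literature. If a verb really did allow the DOD frame with some rate θ, the probability of nevertheless observing n uses of the verb with no DOD at all is (1 − θ)^n, which shrinks quickly as n grows, so persistent absence becomes increasingly strong evidence against the frame.

    # Probability of observing no DOD uses in n occurrences, if the verb
    # actually allowed DOD with probability theta (illustrative values only).
    theta = 0.3
    for n in (1, 5, 10, 20, 50):
        print(n, round((1 - theta) ** n, 4))
    # 1 0.7   5 0.1681   10 0.0282   20 0.0008   50 0.0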
In this paper we compare a range of statistical models consistent with a certain behavior: early overgeneralization, with eventual convergence to the correct target on the basis of exposure to more data.
3 Materials and Methods
3.1 Dative Corpora
To emulate a child language acquisition environment we use naturalistic longitudinal child-directed data, from the Brown corpus in CHILDES, for one child (Adam) for a subset of 19 verbs in the DOD and PD verb frames, figure 1. This dataset was originally reported in Perfors, Tenenbaum, and Wonnacott (2010), and longitudinal and incremental aspects of acquisition are approximated by dividing the available data into 5 incremental epochs (E1 to E5 in the figures), where at the final epoch the learner has seen the full corpus. Model comparison requires a gold standard database for acquisition, reporting which frames have been learned for which verbs at each stage, and how likely a child is to make creative uses of a particular verb in a new frame. An independent gold standard with developmental information (e.g. Gropen et al. (1989)) would clearly be ideal. Absent this, a first step is demonstrating that simpler alternative models can replicate HBM performance on their own terms. Therefore, the gold standard we use for evaluation is the classification predicted by Perfors, Tenenbaum, and Wonnacott (2010). The evaluations reported in our analysis take into account intrinsic characteristics of each model in relation to the likelihoods of the verbs, to determine the extent to which the models go beyond the data they were exposed to, as discussed in section 2. Further, since it has been argued that very low frequency verbs may not yet be firmly placed in a child's lexicon (Yang, 2010; Gropen et al., 1989), at each epoch we also impose a low-frequency threshold of 5 occurrences, considering only verbs that the learner has seen at least 5 times. This use of a low-frequency threshold for learning has extensive support in the literature for learning of all kinds in both human and non-human animals, e.g. (Gallistel, 2002). A cut-off frequency in this range has also commonly been used in NLP tasks like POS tagging (Ratnaparkhi, 1999).
3.2 The learners
We selected a set of representative statistical models that are capable in principle of solving this classification task, ranging from what is perhaps the simplest possible, a simple binomial, all the way to multi-level hierarchical Bayesian approaches. A Binomial distribution serves as the simplest model for capturing the behavior of a verb occurring in either the DOD or the PD frame. Representing the probability of DOD as θ, after n occurrences of the verb the probability that y of them are DOD is:
$p(y \mid \theta, n) = \binom{n}{y}\,\theta^{y}(1-\theta)^{n-y}$   (1)
Considering that $p(y \mid \theta, n)$ is the likelihood in a Bayesian framework, the simplest and most intuitive estimator of θ, given y in n verb occurrences, is the Maximum Likelihood Estimator (MLE):
$\theta_{MLE} = \frac{y}{n}$   (2)
θMLE is viable as a learning model in the sense that its accuracy increases as the amount of evidence for a verb grows (n → ∞), reflecting the incremental, on-line character of language learning. However, one well-known limitation of MLE is that it assigns zero probability mass to unseen events. Ruling out events on the grounds that they did not occur in a finite data set early in learning may be too strong – though it should be noted that this is simply one (overly strong) version of the indirect negative evidence position.
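A minimal sketch of this baseline estimator (Eqs. 1–2), using made-up counts: the MLE for a verb seen only in one frame assigns the other frame probability zero, which is exactly the behavior discussed above.

    from math import comb

    def binom_pmf(y, n, theta):
        """Eq. 1: probability of y DOD uses in n occurrences of a verb."""
        return comb(n, y) * theta**y * (1 - theta)**(n - y)

    def theta_mle(y_dod, n_total):
        """Eq. 2: relative frequency of the DOD frame for one verb."""
        return y_dod / n_total if n_total else 0.0

    # Hypothetical counts: (DOD uses, total uses) per verb.
    counts = {"give": (7, 10), "fix": (0, 6), "take": (3, 3)}
    for verb, (y, n) in counts.items():
        print(verb, theta_mle(y, n))
    # give 0.7, fix 0.0 (no mass left for an unseen DOD use), take 1.0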
Again as is familiar, to overcome the zero-count problem, models adopt one or another method of smoothing to assign a small probability mass to unseen events. In a Bayesian formulation, this amounts to assigning nonzero probability mass to some set of priors; smoothing also captures the notion of generalization, making predictions about data that has never been seen by the learner. In the context of verb learning, smoothing could be based on several principles:
• an (innate) expectation as to how verbs in general should behave;
• an acquired class-based expectation of the behavior of a verb, based on its association with similar but more frequent verbs.
The former can be readily implemented in terms of prior probability estimates. As we discuss below, class-based estimates arise from one or another clustering method, and can produce more accurate estimates for less frequent verbs based on patterns already learned for more frequent verbs in the same class; see (Perfors, Tenenbaum, and Wonnacott, 2010). In this case, smoothing is a side-effect of the behavior of a class as a whole. When learning begins, the prior probability is the only source of information for a learner and, as such, dominates the value of the posterior probability. However, in the large sample limit, it is the likelihood that dominates the posterior distribution regardless of the prior. In Hierarchical Bayesian Models both effects are naturally incorporated. The prior distribution is structured as a chain of distributions of parameters and hyper-parameters, and the data may be divided into classes that share some of the hyper-parameters, as defined below for the case of a three-level model:
$\lambda \sim \mathrm{Exponential}(1)$
$\mu \sim \mathrm{Exponential}(1)$
$\alpha_k \sim \mathrm{Exponential}(\lambda)$
$\beta_k \sim \mathrm{Beta}(\mu, \mu)$
$\theta_{ik} \sim \mathrm{Beta}(\alpha_k\beta_k, \alpha_k(1 - \beta_k))$
$y_i \mid n_i \sim \mathrm{Binomial}(\theta_{ik})$
The indices refer to the possible hierarchies among the hyper-parameters. λ and µ are at the top, and they are shared by all verbs. Then there are classes of different αk, βk, and the probabilities for the DOD frame for the different verbs (θik) are drawn according to the classes k assigned to them. An estimate of θik for a given configuration of clusters is obtained by integrating the posterior mean $(y_i + \alpha_k\beta_k)/(n_i + \alpha_k)$ over the posterior distribution of the hyper-parameters, that is, the unnormalized posterior (the product of the hyper-parameter priors and of the likelihood of the class data $\{y_i, n_i\}_{i \in k}$ given αk and βk) divided by the evidence of the data P(Y).
Figure 1: Verb tokens per epoch (E1 to E5)
Figure 2: Verb tokens ≥ 5 per epoch (E1 to E5)
The Hierarchical Bayesian Model prediction for θi is the average of the estimate θik over all possible partitions of the verbs in the task. To simplify the notation we can write
$\theta_{HBM} = E\!\left[\frac{y + \alpha\beta}{n + \alpha}\right]$   (3)
where the expression E[. . .] includes the integrals described above and the average over all possible class partitions. Due to this complexity, in practice even small data sets require the use of MCMC methods, and statistical models for partitions, like the CRP (Gelman et al., 2003; Perfors, Tenenbaum, and Wonnacott, 2010). This complexity also calls into question the cognitive fidelity of such approaches. Eq. 3 is particularly interesting because by fixing α and β (instead of averaging over them) it is possible to deduce simpler (and classical) models: MLE corresponds to α = 0; the so-called “add-one” smoothing (referred to in this paper as L1) corresponds to α = 2 and β = 1/2. From Eq. 3 it is also clear that if α and β (or their distributions) are unchanged, as the evidence of a verb grows (n → ∞), the HBM estimate approaches MLE’s, (θHBM → θMLE).
On the other hand, when α ≫ n, θHBM ∼ β, so that β can be interpreted as a prior value for θ in the low-frequency limit. Following this reasoning, we propose an alternative approach, a linear competition learner (LCL), that explicitly models the behavior of a given verb as the linear competition between the evidence for the verb, and the average behavior of verbs of the same class. As clustering is defined independently from parameter estimation, the advantages of the proposed approach are twofold. First, it is computationally much simpler, not requiring approximations by Monte Carlo methods. Second, differently from HBMs, where the same attributes are used for clustering and parameter estimation (in this case the DOD and PD counts for each verb), in LCL clustering may be done using more general contexts that employ a variety of linguistic and environmental attributes. For LCL the prior and class-based information are incorporated as:
$\theta_{LCL} = \frac{y_i + \alpha_C\beta_C}{n_i + \alpha_C}$   (4)
where αC and βC are defined via justifiable heuristic expressions dependent solely on the statistics of the class attributed to each verb vi. The strength of the prior (αC) is a monotonic function of the number of elements (mC) in the class C, excluding the target verb vi. To approximate the gold standard behavior of the HBM for this task (Perfors, Tenenbaum, and Wonnacott, 2010) we chose the following function for αC:
$\alpha_C = m_C^{3/2}\left(1 - m_C^{-1/5}\right) + 0.1$   (5)
with the strength of the prior for the LCL model depending on the number of verbs in the class, not on their frequency. Eq. 5 was chosen as a good fit to HBMs, without incurring their complexity. The powers are simple fractions, not arbitrary numbers. A best fit was not attempted due to the lack of assessment of how accurate HBMs are on real data. The prior value (βC) is a smoothed estimation of the probability of DOD in a given class, combining the evidence for all verbs in that class:
$\beta_C = \frac{Y_C + 1/2}{N_C + 1}$   (6)
where YC is the number of DOD occurrences in the class, and NC the total number of verb occurrences in the class, in both cases excluding the target verb vi. The interpretation of these parameters is as follows: βC is the estimate of θ in the absence of any data for a verb; and αC controls the crossover between this estimate and MLE, with a large αC requiring a larger sample (ni) to overcome the bias given by βC. For comparative purposes, in this paper we examine alternative models for (a) probability estimation and (b) clustering. The models are the following:
• two models without clusters: MLE and L1;
• two models where clustering is performed independently: LCL and MLEαβ; and
• the full HBM described before.
MLEαβ corresponds to replacing α and β in Eq. 3 by their maximum likelihood values calculated from $P(\{y_i, n_i\}_{i \in k} \mid \alpha, \beta)$, described before. For models without clustering, estimation is based solely on the observed behavior of verbs. With clustering, same-cluster verbs share some parameters, influencing one another. HBMs place distributions over possible clusters, with estimation derived from averages over distributions. In HBMs, clustering and probability estimation are calculated jointly. In the other models these two estimates are calculated separately, permitting ‘plug-and-play’ use of external clustering methods, like X-means (Pelleg and Moore, 2000)1.
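The LCL estimator is simple enough to state in a few lines of code. The sketch below (illustrative counts; variable and function names are ours) implements Eqs. 4–6 and shows the intended behavior: with little evidence the estimate stays close to the class-based prior βC, and as ni grows it approaches the verb's own MLE.

    def lcl_theta(y_i, n_i, class_stats):
        """Eqs. 4-6. class_stats = (m_C, Y_C, N_C): number of other verbs in the
        class, their DOD count and their total count (target verb excluded)."""
        m_c, y_c, n_c = class_stats
        alpha_c = m_c ** 1.5 * (1 - m_c ** -0.2) + 0.1          # Eq. 5
        beta_c = (y_c + 0.5) / (n_c + 1)                        # Eq. 6
        return (y_i + alpha_c * beta_c) / (n_i + alpha_c)       # Eq. 4

    # A hypothetical alternating class: 8 other verbs, 60 DOD uses out of 100.
    stats = (8, 60, 100)
    for n in (0, 2, 10, 100):
        y = n                      # suppose the target verb is only ever seen as DOD
        print(n, round(lcl_theta(y, n, stats), 3))
    # 0 0.599   2 0.681   10 0.824   100 0.971  -> drifts from beta_C toward the MLE (1.0)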
However, to further assess the impact of cluster assignment on alternative model performance, we also used the clusters that maximize the evidence of the HBM for the DOD and PD counts of the target verbs, and we refer to these as Maximum Evidence (ME) clusters. In ME clusters, verbs are separated into 3 classes: one for verbs with counts for both frames; another for those with only the DOD frame; and a final one for those with only the PD frame.
4 Evaluation
The learning task consists of estimating the probability that a given verb occurs in a particular frame, using previous occurrences as the basis for this estimation. In this context, overgeneralization can be viewed as the model predicting that a given verb seen only in one frame (say, a PD) can also occur in the other (say, a DOD), and it decreases as the learner receives more data. At one extreme we have MLE, which does not overgeneralize, and at the other the L1 model, which assigns uniform probability to all unseen cases. The other 3 models fall somewhere in between, overgeneralizing beyond the observed data, using the prior and class-based smoothing to assign some (low) probability mass to an unseen verb-frame pair. The relevant models' predictions for each of the target verbs in the DOD frame, given the full corpus, are in Figure 3. At either end of the figure are the verbs that were attested in only one of the frames (PD only at the left-hand end, and DOD only at the right-hand end). For these verbs, LCL and HBM exhibit similar behavior. When the low-frequency threshold is applied, MLEαβ, HBM and LCL work equally well (Figure 4).
1 Other clustering algorithms were also used; here we report X-means results as representative of these models. X-means is available from http://www.cs.waikato.ac.nz/ml/weka/
Figure 4: Probability of verbs in DOD frame, Low Frequency Threshold.
To examine how overgeneralization progresses during the course of learning as the models were exposed to increasing amounts of data, we used the corpus divided by cumulative epochs, as described in §3.1. For each epoch, verbs seen in only one of the frames were divided into 5 frequency bins, and the models were assessed as to how much overgeneralization they displayed for each of these verbs. Following Perfors, Tenenbaum, and Wonnacott (2010), overgeneralization is calculated as the absolute difference between the model's predicted θ and θMLE for each of the epochs (Figure 5), and for comparative purposes their alternating/non-alternating classification is also adopted. For non-alternating verbs, overgeneralization reflects the degree of smoothing of each model. As expected, the more frequent a verb is, the more confident the model is in the indirect negative evidence it has for that verb, and the less it overgeneralizes, as shown by the lighter bars in all epochs. In addition, the overall effect of larger amounts of data is indicated by a reduction in overgeneralization epoch by epoch. The effects of class-based smoothing can be assessed by comparing L1, a model without clustering which displays a constant degree of overgeneralization regardless of the epoch, with the models that use class information (HBM, which uses a distribution over clusters, and the other models, which use X-means). If a low-frequency threshold is applied, the differences between the models decrease significantly and so does the degree of overgeneralization in the models' predictions, as shown by the 3 lighter bars in the figure.
Figure 5: Overgeneralization, per epoch, per frequency bin, where 0.5 corresponds to the maximum overgeneralization.
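Overgeneralization as used above is simply the absolute difference between a model's prediction and the MLE. A small sketch of computing it per frequency bin for single-frame verbs follows; the L1 estimator is used because it needs no class information, and the power-of-two binning is an illustrative assumption rather than the binning used in the paper.

```python
from collections import defaultdict

def theta_mle(y, n):
    return y / n

def theta_l1(y, n):
    return (y + 1.0) / (n + 2.0)     # add-one smoothing (alpha = 2, beta = 1/2 in Eq. 3)

def overgeneralization_by_bin(verbs, estimator):
    """Mean |theta_model - theta_MLE| per frequency bin, for verbs seen in one frame only.

    verbs: list of (y, n) counts, with y == 0 (PD only) or y == n (DOD only).
    estimator: function (y, n) -> predicted DOD probability.
    """
    bins = defaultdict(list)
    for y, n in verbs:
        b = min(4, n.bit_length() - 1)   # assumed bins: 1, 2-3, 4-7, 8-15, 16+
        bins[b].append(abs(estimator(y, n) - theta_mle(y, n)))
    return {b: sum(d) / len(d) for b, d in sorted(bins.items())}

# PD-only verbs at increasing frequencies: overgeneralization shrinks with evidence.
print(overgeneralization_by_bin([(0, 1), (0, 2), (0, 5), (0, 12), (0, 40)], theta_l1))
```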
While the models differ somewhat in their predictions, the quantitative differences need to be assessed more carefully. To compare the models and provide an overall difference measure, we use the predictions of the more complex model, HBM, as a baseline and then calculate the difference between its predictions and those of the other models. We used three different measures for comparing models: one for their standard difference; one that prioritizes agreement on high-frequency verbs; and one that focuses more on low-frequency verbs. The first measure, denoted Difference and written D(M1, M2), captures a direct comparison between two models, M1 and M2, as the average prediction difference over the verbs. This measure treats all differences uniformly, regardless of whether they relate to high- or low-frequency verbs in the learning sample (e.g., bring, with 150 counts, and serve, with only 1, have the same weight). To focus on high-frequency verbs, we also define the Weighted Difference between two models, Dn(M1, M2), in which each verb's contribution is weighted by the amount of evidence for it. Here we expect Dn < D, since models tend to agree as the amount of evidence for each verb increases. Conversely, our third measure, denoted Inverted and written D1/n(M1, M2), prioritizes the agreement between two models on low-frequency verbs; D1/n captures the degree of similarity in overgeneralization between two models.
Figure 3: Probability of verbs in DOD frame.
The results of applying these three difference measures are shown in Figure 6 for the relevant models, where grey is for D(M1, M2), black for Dn(M1, M2) and white for D1/n(M1, M2). Given the probabilistic nature of Monte Carlo methods, there is also variation between different runs of the HBM model (HBM to HBM2), and this indicates that models that perform within these bounds can be considered equivalent (e.g., the HBMs and ME-MLEαβ for the Weighted Difference, and the HBMs and X-MLEαβ for the Inverted Difference). Comparing the prediction agreement, the strong influence of clustering is clear: the models that have compatible clusters have similar performances. For instance, all the models that adopt the ME clusters for the data perform closest to HBMs. Moreover, the weighted differences tend to be smaller than 0.01 and around 0.02 for the inverted differences. The results for these measures become even closer in most cases when the low-frequency threshold is adopted (Figure 7), as the evidence reduces the influence of the prior.
Figure 6: Model Comparisons.
Figure 7: Model Comparison - Low Frequency Threshold.
Figure 8: DOD probability evolution for models with increase in evidence (x-axis: number of examples, 0–50; y-axis: DOD probability, 0.5–1; curves: MLE, L1, HBM, LCL).
To examine the decay of overgeneralization with the increase in evidence for these models, two simulated scenarios are defined for a single generic verb: one where the evidence for DOD amounts to 75% of the data (dashed lines) and another where it amounts to 100% (solid lines); see Figures 8 and 9. Unsurprisingly, the performance of the models is dependent on the amount of evidence available. This is a consequence of the decrease in the influence of the priors as the sample size increases, at a rate of 1/N, as shown in Figure 9 for the decrease in overgeneralization. Ultimately it is the evidence that dominates the posterior probability.
Figure 9: Overgeneralization reduction with increase in evidence (log-log axes: number of examples, 10^0–10^2, vs. overgeneralization, 10^-4–10^0; curves: L1, HBM, LCL).
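The 1/N decay of the prior's influence mentioned above can be reproduced in a few lines. This sketch tracks overgeneralization for the L1 estimator on a simulated verb whose evidence is 100% DOD (the solid-line scenario); the HBM and LCL curves would additionally require the class machinery sketched earlier, so they are omitted here.

```python
def theta_l1(y, n):
    return (y + 1.0) / (n + 2.0)      # add-one smoothing (alpha = 2, beta = 1/2)

def theta_mle(y, n):
    return y / n

# Scenario: every observed example of the generic verb is a DOD.
for n in (1, 2, 5, 10, 20, 50, 100):
    overgen = abs(theta_l1(n, n) - theta_mle(n, n))
    print(f"n={n:4d}  theta_L1={theta_l1(n, n):.3f}  overgeneralization={overgen:.4f}")
# The overgeneralization column falls off as 1/(n + 2), i.e. roughly the 1/N decay
# of the prior's influence discussed above.
```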
Although the Bayesian model exhibits fast convergence, after 10 examples, the simpler model L1 is only approximately 3% below the Bayesian model in performance for scenario 1 and is still 90% accurate in scenario 2, figure 8. These results suggest that while these models all differ slightly in the degree of overgeneralization for low frequency data and noise, these differences are small, and as evidence reaches approximately 10 examples per verb, the overall performance for all models approaches that of MLE. 5 Conclusions and Future Work HBMs have been successfully used for a number of language acquisition tasks capturing both patterns of under- and overgeneralization found in child language acquisition. Their (hyper)parameters provide robustness for dealing with low frequency events, noise, and uncertainty and a good fit to the data, but this fidelity comes at the cost of complex computation. Here we have examined HBMs against computationally simpler approaches to dative alternation acquisition, which implement the indirect negative approach. We also advanced several measures for model comparison in order to quantify their agreement to assist in the task of model selection. The results show that the proposed LCL model, in particular, that combines class-based smoothing with maximum likelihood estimation, obtains results comparable to those of HBMs, in a much simpler framework. Moreover, when a cognitively-viable frequency threshold is adopted, differences in the performance of all models decrease, and quite rapidly approach the performance of MLE. In this paper we used standard clustering techniques grounded solely on verb counts to enable comparison with previous work. However, a variety of additional linguistic and distributional features could be used for clustering verbs into more semantically motivated classes, using a larger number of frames and verbs. This will be examined in future work. We also plan to investigate the use of clustering methods more targeted to language tasks (Sun and Korhonen, 2009). Acknowledgements We would like to thank the support of projects CAPES/COFECUB 707/11, CNPq 482520/2012-4, 478222/2011-4, 312184/20123, 551964/2011-1 and 312077/2012-2. We also want to thank Amy Perfors for kindly sharing the input data. References Baker, Carl L. 1979. Syntactic Theory and the Projection Problem. Linguistic Inquiry, 10(4):533– 581. Briscoe, Ted. 1997. Co-evolution of language and the language acquisition device. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL), pages 418–427. Morgan Kaufmann. Brown, Roger. 1973. A first language: Ehe early stages. Harvard University Press, Cambridge, Massachusetts. Brown, Roger and Camille Hanlon. 1970. Derivational complexity and the order of acquisition of child’s speech. In J. Hays, editor, Cognition and the Development of Language. NY: John Wiley. Chater, Nick, Joshua B. Tenenbaum, and Alan Yuille. 2006. Probabilistic models of cognition: where next? Trends in Cognitive Sciences, 10(7):292 – 293. Chomsky, Noam. 1981. Lectures on government and binding. Mouton de Gruyter. 1329 Gallistel, Charles R. 2002. Frequency, contingency, and the information processing theory of conditioning. In P.Sedlmeier and T. Betsch, editors, Frequency processing and cognition. Oxford University Press, pages 153–171. Gelman, Andrew, John B. Carlin, Hal S. Stern, and Donald B. Rubin. 2003. Bayesian Data Analysis, Second Edition (Chapman & Hall/CRC Texts in Statistical Science). 
Chapman and Hall/CRC, 2 edition. Gropen, Jess, Steve Pinker, Michael Hollander, Richard Goldberg, and Ronald Wilson. 1989. The learnability and acquisition of the dative alternation in English. Language, 65(2):203–257. Hsu, Anne S. and Nick Chater. 2010. The logical problem of language acquisition: A probabilistic perspective. Cognitive Science, 34(6):972– 1016. Ingram, David. 1989. First Language Acquisition: Method, Description and Explanation. Cambridge University Press. Jones, Matt and Bradley C. Love. 2011. Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34(04):169–188. Kwiatkowski, Tom, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1223–1233. Kwisthout, Johan, Todd Wareham, and Iris van Rooij. 2011. Bayesian intractability is not an ailment that approximation can cure. Cognitive Science, 35(5):779–1007. Levin, B. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago, IL. MacWhinney, Brian. 1995. The CHILDES project: tools for analyzing talk. Hillsdale, NJ: Lawrence Erlbaum Associates, second edition. Marcus, Gary F. 1993. Negative evidence in language acquisition. Cognition, 46:53–85. Marr, D. 1982. Vision. San Francisco, CA: W. H. Freeman. Nematzadeh, Aida, Afsaneh Fazly, and Suzanne Stevenson. 2013. Child acquisition of multiword verbs: A computational investigation. In A. Villavicencio, T. Poibeau, A. Korhonen, and A. Alishahi, editors, Cognitive Aspects of Computational Language Acquisition. Springer, pages 235–256. Parisien, Christopher, Afsaneh Fazly, and Suzanne Stevenson. 2008. An incremental bayesian model for learning syntactic categories. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, CoNLL ’08, pages 89–96, Stroudsburg, PA, USA. Association for Computational Linguistics. Parisien, Christopher and Suzanne Stevenson. 2010. Learning verb alternations in a usagebased bayesian model. In Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Pelleg, Dan and Andrew Moore. 2000. X-means: Extending k-means with efficient estimation of the number of clusters. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 727–734, San Francisco. Morgan Kaufmann. Perfors, Amy, Joshua B. Tenenbaum, and Elizabeth Wonnacott. 2010. Variability, negative evidence, and the acquisition of verb argument constructions. Journal of Child Language, (37):607–642. Ratnaparkhi, Adwait. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, pages 151–175. Shalizi, Cosma R. 2009. Dynamics of bayesian updating with dependent data and misspecified models. ElectroCosmanic Journal of Statistics, 3:1039–1074. Sun, Lin and Anna Korhonen. 2009. Improving verb clustering with automatically acquired selectional preferences. In EMNLP, pages 638– 647. Villavicencio, Aline. 2002. The Acquisition of a Unification-Based Generalised Categorial Grammar. Ph.D. thesis, Computer Laboratory, University of Cambridge. Wonnacott, Elizabeth, Elissa L. Newport, and Michael K. Tanenhaus. 2008. Acquiring and processing verb argument structure: Distributional learning in a miniature language. Cognitive Psychology, 56:165–209. Yang, Charles. 2010. 
Three factors in language variation. Lingua, 120:1160–1177.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1331–1340, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Two Level Model for Context Sensitive Inference Rules Oren Melamud§, Jonathan Berant†, Ido Dagan§, Jacob Goldberger♦, Idan Szpektor‡ § Computer Science Department, Bar-Ilan University † Computer Science Department, Stanford University ♦Faculty of Engineering, Bar-Ilan University ‡ Yahoo! Research Israel {melamuo,dagan,goldbej}@{cs,cs,eng}.biu.ac.il [email protected] [email protected] Abstract Automatic acquisition of inference rules for predicates has been commonly addressed by computing distributional similarity between vectors of argument words, operating at the word space level. A recent line of work, which addresses context sensitivity of rules, represented contexts in a latent topic space and computed similarity over topic vectors. We propose a novel two-level model, which computes similarities between word-level vectors that are biased by topic-level context representations. Evaluations on a naturallydistributed dataset show that our model significantly outperforms prior word-level and topic-level models. We also release a first context-sensitive inference rule set. 1 Introduction Inference rules for predicates have been identified as an important component in semantic applications, such as Question Answering (QA) (Ravichandran and Hovy, 2002) and Information Extraction (IE) (Shinyama and Sekine, 2006). For example, the inference rule ‘X treat Y →X relieve Y’ can be useful to extract pairs of drugs and the illnesses which they relieve, or to answer a question like “Which drugs relieve headache?”. Along this vein, such inference rules constitute a crucial component in generic modeling of textual inference, under the Textual Entailment paradigm (Dagan et al., 2006; Dinu and Wang, 2009). Motivated by these needs, substantial research was devoted to automatic learning of inference rules from corpora, mostly in an unsupervised distributional setting. This research line was mainly initiated by the highly-cited DIRT algorithm (Lin and Pantel, 2001), which learns inference for binary predicates with two argument slots (like the rule in the example above). DIRT represents a predicate by two vectors, one for each of the argument slots, where the vector entries correspond to the argument words that occurred with the predicate in the corpus. Inference rules between pairs of predicates are then identified by measuring the similarity between their corresponding argument vectors. This general scheme was further enhanced in several directions, e.g. directional similarity (Bhagat et al., 2007; Szpektor and Dagan, 2008) and meta-classification over similarity values (Berant et al., 2011). Consequently, several knowledge resources of inference rules were released, containing the top scoring rules for each predicate (Schoenmackers et al., 2010; Berant et al., 2011; Nakashole et al., 2012). The above mentioned methods provide a single confidence score for each rule, which is based on the obtained degree of argument-vector similarities. Thus, a system that applies an inference rule to a text may estimate the validity of the rule application based on the pre-specified rule score. However, the validity of an inference rule may depend on the context in which it is applied, such as the context specified by the given predicate’s arguments. 
For example, ‘AT&T acquire TMobile →AT&T purchase T-Mobile’, is a valid application of the rule ‘X acquire Y →X purchase Y’, while ‘Children acquire skills →Children purchase skills’ is not. To address this issue, a line of works emerged which computes a contextsensitive reliability score for each rule application, based on the given context. The major trend in context-sensitive inference models utilizes latent or class-based methods for context modeling (Pantel et al., 2007; Szpektor et al., 2008; Ritter et al., 2010; Dinu and Lapata, 2010b). In particular, the more recent methods (Ritter et al., 2010; Dinu and Lapata, 2010b) modeled predicates in context as a probability distribution over topics learned by a Latent Dirichlet Allo1331 cation (LDA) model. Then, similarity is measured between the two topic distribution vectors corresponding to the two sides of the rule in the given context, yielding a context-sensitive score for each particular rule application. We notice at this point that while contextinsensitive methods represent predicates by argument vectors in the original fine-grained word space, context-sensitive methods represent them as vectors at the level of latent topics. This raises the question of whether such coarse-grained topic vectors might be less informative in determining the semantic similarity between the two predicates. To address this hypothesized caveat of prior context-sensitive rule scoring methods, we propose a novel generic scheme that integrates wordlevel and topic-level representations. Our scheme can be applied on top of any context-insensitive “base” similarity measure for rule learning, which operates at the word level, such as Cosine or Lin (Lin, 1998). Rather than computing a single context-insensitive rule score, we compute a distinct word-level similarity score for each topic in an LDA model. Then, when applying a rule in a given context, these different scores are weighed together based on the specific topic distribution under the given context. This way, we calculate similarity over vectors in the original word space, while biasing them towards the given context via a topic model. In order to promote replicability and equal-term comparison with our results, we based our experiments on publicly available datasets, both for unsupervised learning of the evaluated models and for testing them over a random sample of rule applications. We apply our two-level scheme over three state-of-the-art context-insensitive similarity measures. The evaluation compares performances both with the original context-insensitive measures and with recent LDA-based contextsensitive methods, showing consistent and robust advantages of our scheme. Finally, we release a context-sensitive rule resource comprising over 2,000 frequent verbs and one million rules. 2 Background and Model Setting This section presents components of prior work which are included in our model and experiments, setting the technical preliminaries for the rest of the paper. We first present context-insensitive rule learning, based on distributional similarity at the word level, and then context-sensitive scoring for rule applications, based on topic-level similarity. Some further discussion of related work appears in Section 6. 2.1 Context-insensitive Rule Learning A predicate inference rule ‘LHS →RHS’, such as ‘X acquire Y →X purchase Y’, specifies a directional inference relation between two predicates. 
Each rule side consists of a lexical predicate and (two) variable slots for its arguments.1 Different representations have been used to specify predicates and their argument slots, such as word lemma sequences, regular expressions and dependency parse fragments. A rule can be applied when its LHS matches a predicate with a pair of arguments in a text, allowing us to infer its RHS, with the corresponding instantiations for the argument variables. For example, given the text "AT&T acquires T-Mobile", the above rule infers "AT&T purchases T-Mobile". The DIRT algorithm (Lin and Pantel, 2001) follows the distributional similarity paradigm to learn predicate inference rules. For each predicate, DIRT represents each of its argument slots by an argument vector. We denote the two vectors of the X and Y slots of a predicate pred by v^x_pred and v^y_pred, respectively. Each entry of a vector v corresponds to a particular word (or term) w that instantiated the argument slot in a learning corpus, with a value v(w) = PMI(pred, w) (with PMI standing for point-wise mutual information). To learn inference rules, DIRT considers (in principle) each pair of binary predicates that occurred in the corpus for a candidate rule, 'LHS → RHS'. Then, DIRT computes a reliability score for the rule by combining the measured similarities between the corresponding argument vectors of the two rule sides. Concretely, denoting by l and r the predicates appearing in the two rule sides, DIRT's reliability score is defined as follows:
scoreDIRT(LHS → RHS) = sqrt( sim(v^x_l, v^x_r) · sim(v^y_l, v^y_r) )   (1)
where sim(v, v′) is a vector similarity measure. Specifically, DIRT employs the Lin similarity measure from (Lin, 1998), defined as follows:
Lin(v, v′) = Σ_{w ∈ v∩v′} [v(w) + v′(w)] / Σ_{w ∈ v∪v′} [v(w) + v′(w)]   (2)
We note that the general DIRT scheme may be used while employing other "base" vector similarity measures. For example, the Lin measure is symmetric, and thus using it would yield the same reliability score when swapping the two sides of a rule. This issue has been addressed in a separate line of research which introduced directional similarity measures suitable for inference relations (Bhagat et al., 2007; Szpektor and Dagan, 2008; Kotlerman et al., 2010). In our experiments we apply our proposed context-sensitive similarity scheme over three different base similarity measures. DIRT and similar context-insensitive inference methods provide a single reliability score for a learned inference rule, which aims to predict the validity of the rule's applications. However, as exemplified in the Introduction, an inference rule may be valid in some contexts but invalid in others (e.g. acquiring entails purchasing for goods, but not for skills). Since vector similarity in DIRT is computed over the single aggregate argument vector, the obtained reliability score tends to be biased towards the dominant contexts of the involved predicates. For example, we may expect a higher score for 'acquire → purchase' than for 'acquire → learn', since the former matches a more frequent sense of acquire in a typical corpus. Following this observation, it is desired to obtain a context-sensitive reliability score for each rule application in a given context, as described next.
1 We follow most of the inference-rule learning literature, which focused on binary predicates. However, our context-sensitive scheme can be applied to any arity.
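As a concrete reading of Eqs. 1–2, here is a small sketch of the Lin measure and the DIRT score over sparse, PMI-weighted argument vectors represented as Python dicts; the toy vectors and their values are invented purely for illustration.

```python
import math

def lin_sim(v, v2):
    """Lin similarity (Eq. 2) between two sparse feature vectors (dicts word -> PMI)."""
    shared = set(v) & set(v2)
    union = set(v) | set(v2)
    num = sum(v[w] + v2[w] for w in shared)
    den = sum(v.get(w, 0.0) + v2.get(w, 0.0) for w in union)
    return num / den if den else 0.0

def dirt_score(vx_l, vy_l, vx_r, vy_r, sim=lin_sim):
    """DIRT reliability score (Eq. 1): geometric mean of the X- and Y-slot similarities."""
    return math.sqrt(sim(vx_l, vx_r) * sim(vy_l, vy_r))

# Toy PMI vectors for 'X acquire Y' and 'X purchase Y' (values are made up):
acquire_x = {"google": 2.1, "at&t": 1.7, "student": 0.9}
acquire_y = {"company": 2.4, "skill": 1.8, "t-mobile": 1.5}
purchase_x = {"google": 1.9, "at&t": 1.6, "customer": 1.1}
purchase_y = {"company": 2.2, "t-mobile": 1.4, "ticket": 1.0}

print(dirt_score(acquire_x, acquire_y, purchase_x, purchase_y))
```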
2.2 Context-sensitive Rule Applications To assess the reliability of applying an inference rule in a given context we need some model for context representation, that should affect the rule reliability score. A major trend in past work is to represent contexts in a reduced-dimensionality latent or class-based model. A couple of earlier works utilized a cluster-based model (Pantel et al., 2007) and an LSA-based model (Szpektor et al., 2008), in a selectional-preferences style approach. Several more recent works utilize a Latent Dirichlet Allocation (LDA) (Blei et al., 2003) framework. We now present an underlying unified view of the topic-level models in (Ritter et al., 2010; Dinu and Lapata, 2010b), which we follow in our own model and in comparative model evaluations. We note that a similar LDA model construction was employed also in (S´eaghdha, 2010), for estimating predicate-argument likelihood. First, an LDA model is constructed, as follows. Similar to the construction of argument vectors in the distributional model (described above in subsection 2.1), all arguments instantiating each predicate slot are extracted from a large learning corpus. Then, for each slot of each predicate, a pseudo-document is constructed containing the set of all argument words that instantiated this slot in the corpus. We denote the two documents constructed for the X and Y slots of a predicate pred by dx pred and dy pred, respectively. In comparison to the distributional model, these two documents correspond to the analogous argument vectors vx pred and vy pred, both containing exactly the same set of words. Next, an LDA model is learned from the set of all pseudo-documents, extracted for all predicates.2 The learning process results in the construction of K latent topics, where each topic t specifies a distribution over all words, denoted by p(w|t), and a topic distribution for each pseudodocument d, denoted by p(t|d). Within the LDA model we can derive the a-posteriori topic distribution conditioned on a particular word within a document, denoted by p(t|d, w) ∝p(w|t) · p(t|d). In the topic-level model, d corresponds to a predicate slot and w to a particular argument word instantiating this slot. Hence, p(t|d, w) is viewed as specifying the relevance (or likelihood) of the topic t for the predicate slot in the context of the given argument instantiation. For example, for the predicate slot ‘acquire Y’ in the context of the argument ‘IBM’, we expect high relevance for a topic about companies, while in the context of the argument ‘knowledge’ we expect high relevance for a topic about abstract concepts. Accordingly, the distribution p(t|d, w) over all topics provides a topic-level representation for a predicate slot in the context of a particular argument w. This representation is used by the topic-level model to compute a context-sensitive score for inference rule applications, as follows. 2We note that there are variants in the type of LDA model and the way the pseudo-documents are constructed in the referenced prior work. In order to focus on the inference methods rather than on the underlying LDA model, we use the LDA framework described in this paper for all compared methods. 1333 Consider the application of an inference rule ‘LHS →RHS’ in the context of a particular pair of arguments for the X and Y slots, denoted by wx and wy, respectively. 
Denoting by l and r the predicates appearing in the two rule sides, the reliability score of the topic-level model is defined as follows (we present a geometric mean formulation for consistency with DIRT): (3) scoreTopic(LHS →RHS, wx, wy) = q sim(dx l , dxr, wx) · sim(dy l , dy r, wy) where sim(d, d′, w) is a topic-distribution similarity measure conditioned on a given context word. Specifically, Ritter et al. (2010) utilized the dot product form for their similarity measure: (4) simDC(d, d′, w) = Σt[p(t|d, w) · p(t|d′, w)] (the subscript DC stands for double-conditioning, as both distributions are conditioned on the argument word, unlike the measure below). Dinu and Lapata (2010b) presented a slightly different similarity measure for topic distributions that performed better in their setting as well as in a related later paper on context-sensitive scoring of lexical similarity (Dinu and Lapata, 2010a). In this measure, the topic distribution for the right hand side of the rule is not conditioned on w: (5) simSC(d, d′, w) = Σt[p(t|d, w) · p(t|d′)] (the subscript SC stands for single-conditioning, as only the left distribution is conditioned on the argument word). They also experimented with a few variants for the structure of the similarity measure and assessed that best results are obtained with the dot product form. In our experiments, we employ these two similarity measures for topic distributions as baselines representing topic-level models. Comparing the context-insensitive and contextsensitive models, we see that both of them measure similarity between vector representations of corresponding predicate slots. However, while DIRT computes sim(v, v′) over vectors in the original word-level space, topic-level models compute sim(d, d′, w) by measuring similarity of vectors in a reduced-dimensionality latent space. As conjectured in the introduction, such coarse-grain representation might lead to loss of information. Hence, in the next section we propose a combined two-level model, which represents predicate slots in the original word-level space while biasing the similarity measure through topic-level context models. 3 Two-level Context-sensitive Inference Our model follows the general DIRT scheme while extending it to handle context-sensitive scoring of rule applications, addressing the scenario dealt by the context-sensitive topic models. In particular, we define the context-sensitive score scoreWT, where WT stands for the combination of the Word/Topic levels: (6) scoreWT(LHS →RHS, wx, wy) = q sim(vx l , vxr , wx) · sim(vy l , vy r, wy) Thus, our model computes similarity over wordlevel (rather than topic-level) argument vectors, while biasing it according to the specific argument words in the given rule application context. The core of our contribution is thus defining the context-sensitive word-level vector similarity measure sim(v, v′, w), as described in the remainder of this section. Following the methods in Section 2, for each predicate pred we construct, from the learning corpus, its argument vectors vx pred and vy pred as well as its argument pseudo-documents dx pred and dy pred. For convenience, when referring to an argument vector v, we will denote the corresponding pseudo-document by dv. Based on all pseudodocuments we learn an LDA model and obtain its associated probability distributions. The calculation of sim(v, v′, w) is composed of two steps. 
At learning time, we compute for each candidate rule a separate, topic-biased, similarity score per each of the topics in the LDA model. Then, at rule application time, we compute an overall reliability score for the rule by combining the per-topic similarity scores, while biasing the score combination according to the given context of w. These two steps are described in the following two subsections. 3.1 Topic-biased Word-vector Similarities Given a pair of word vectors v and v′, and any desired “base” vector similarity measure sim (e.g. simLin), we compute a topic-biased similarity score for each LDA topic t, denoted by simt(v, v′). simt(v, v′) is computed by applying 1334 the original similarity measure over topic-biased versions of v and v′, denoted by vt and v′ t: simt(v, v′) = sim(vt, v′ t) where vt(w) = v(w) · p(t|dv, w) That is, each value in the biased vector, vt(w), is obtained by weighing the original value v(w) by the relevance of the topic t to the argument word w within dv. This way, rather than replacing altogether the word-level values v(w) by the topic probabilities p(t|dv, w), as done in the topiclevel models, we use the latter to only bias the former while preserving fine-grained word-level representations. The notation Lint denotes the simt measure when applied using Lin as the base similarity measure sim. This learning process results in K different topic-biased similarity scores for each candidate rule, where K is the number of LDA topics. Table 1 illustrates topic-biased similarities for the Y slot of two rules involving the predicate ‘acquire’. As can be seen, the topic-biased score Lint for ‘acquire →learn’ for t2 is higher than the Lin score, since this topic is characterized by arguments that commonly appear with both predicates of the rule. Consequently, the two predicates are found to be distributionally similar when biased for this topic. On the other hand, the topic-biased similarity for t1 is substantially lower, since prominent words in this topic are likely to occur with ‘acquire’ but not with ‘learn’, yielding low distributional similarity. Opposite behavior is exhibited for the rule ‘acquire →purchase’. 3.2 Context-sensitive Similarity When applying an inference rule, we compute for each slot its context-sensitive similarity score simWT(v, v′, w), where v and v′ are the slot’s argument vectors for the two rule sides and w is the word instantiating the slot in the given rule application. This score is computed as a weighted average of the rule’s K topic-biased similarity scores simt. In this average, each topic is weighed by its “relevance” for the context in which the rule is applied, which consists of the left-hand-side predicate v and the argument w. This relevance is capTopic t1 t2 Top 5 words calbiochem rights corel syndrome networks majority viacom knowledge financially skill acquire →learn Lint(v, v′) 0.040 0.334 Lin(v, v′) 0.165 acquire →purchase Lint(v, v′) 0.427 0.241 Lin(v, v′) 0.267 Table 1: Two characteristic topics for the Y slot of ‘acquire’, along with their topic-biased Lin similarities scores Lint, compared with the original Lin similarity, for two rules. The relevance of each topic to different arguments of ‘acquire’ is illustrated by showing the top 5 words in the argument vector vy acquire for which the illustrated topic is the most likely one. 
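A minimal sketch of the two steps just described, assuming the argument vectors and the per-word topic posteriors p(t|d, w) have already been computed (in the paper the latter come from an LDA model trained with Mallet, via p(t|d, w) ∝ p(w|t)·p(t|d)). Cosine is used here as the base measure, one of the three the paper evaluates.

```python
import math

def cosine(v, w):
    """A base word-level similarity measure over sparse vectors (dict word -> weight)."""
    dot = sum(v[x] * w[x] for x in set(v) & set(w))
    norm = (math.sqrt(sum(a * a for a in v.values()))
            * math.sqrt(sum(a * a for a in w.values())))
    return dot / norm if norm else 0.0

def topic_biased_sims(v_l, v_r, p_l, p_r, n_topics, base_sim=cosine):
    """Learning-time step: one topic-biased similarity sim_t per LDA topic.

    v_l, v_r: argument vectors (word -> weight) for one slot of the two rule sides.
    p_l, p_r: word -> list of p(t | d, w) over the n_topics topics, assumed to be
              inferred from the trained LDA model over the slot pseudo-documents.
    """
    sims = []
    for t in range(n_topics):
        vt_l = {w: val * p_l[w][t] for w, val in v_l.items() if w in p_l}
        vt_r = {w: val * p_r[w][t] for w, val in v_r.items() if w in p_r}
        sims.append(base_sim(vt_l, vt_r))
    return sims

def sim_wt(sims_per_topic, p_context):
    """Application-time step: weight the per-topic scores by p(t | d_v, w), where w is
    the argument word instantiating the slot on the rule's left-hand side."""
    return sum(p * s for p, s in zip(p_context, sims_per_topic))
```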
tured by p(t|dv, w): simWT(v, v′, w) = X t [p(t|dv, w) · simt(v, v′)] (7) This way, a rule application would obtain a high score only if the current context fits those topics for which the rule is indeed likely to be valid, as captured by a high topic-biased similarity. The notation LinWT denotes the simWT measure, when using Lint as the topic-biased similarity measure. Table 2 illustrates the calculation of contextsensitive similarity scores in four rule applications, involving the Y slot of the predicate ‘acquire’. We observe that relative to the fixed context-insensitive Lin score, the score of ‘acquire →learn’ is substantially promoted for the argument ‘skill’ while being demoted for ‘Skype’. The opposite behavior is observed for ‘acquire →purchase’, altogether demonstrating how our model successfully biases the similarity score according to rule validity in context. 4 Experimental Settings To evaluate our model, we compare it both to context-insensitive similarity measures as well as to prior context-sensitive methods. Furthermore, to better understand its applicability in typical NLP tasks, we focus on an evaluation setting that corresponds to a natural distribution of examples from a large corpus. 1335 Topic t1 t2 Top 5 words calbiochem rights corel syndrome networks majority viacom knowledge financially skill ‘acquire Skype →learn Skype’ p(t|dv, w) 0.974 0.000 Lint(v, v′) 0.040 0.334 LinWT(v, v′, w) 0.039 Lin(v, v′) 0.165 ‘acquire Skype →purchase Skype’ p(t|dv, w) 0.974 0.000 Lint(v, v′) 0.427 0.241 LinWT(v, v′, w) 0.417 Lin(v, v′) 0.267 ‘acquire skill →learn skill’ p(t|dv, w) 0.000 0.380 Lint(v, v′) 0.040 0.334 LinWT(v, v′, w) 0.251 Lin(v, v′) 0.165 ‘acquire skill →purchase skill’ p(t|dv, w) 0.000 0.380 Lint(v, v′) 0.427 0.241 LinWT(v, v′, w) 0.181 Lin(v, v′) 0.267 Table 2: Context-sensitive similarity scores (in bold) for the Y slots of four rule applications. The components of the score calculation are shown for the topics of Table 1. For each rule application, the table shows a couple of the topic-biased scores Lint of the rule (as in Table 1), along with the topic relevance for the given context p(t|dv, w), which weighs the topic-biased scores in the LinWT calculation. The context-insensitive Lin score is shown for comparison. 4.1 Evaluated Rule Application Methods We evaluated the following rule application methods: the original context-insensitive word model, following DIRT (Lin and Pantel, 2001), as described in Equation 1, denoted by CI; our own topic-word context-sensitive model, as described in Equation 6, denoted by WT. In addition, we evaluated two variants of the topic-level contextsensitive model, denoted DC and SC. DC follows the double conditioned contextualized similarity measure according to Equation 4, as implemented by (Ritter et al., 2010), while SC follows the single conditioned one at Equation 5, as implemented by (Dinu and Lapata, 2010b; Dinu and Lapata, 2010a). Since our model can contextualize various distributional similarity measures, we evaluated the performance of all the above methods on several base similarity measures and their learned rulesets, namely Lin (Lin, 1998), BInc (Szpektor and Dagan, 2008) and vector Cosine similarity. The Lin similarity measure is described in Equation 2. Binc (Szpektor and Dagan, 2008) is a directional similarity measure between word vectors, which outperformed Lin for predicate inference (Szpektor and Dagan, 2008). 
To build the rule-sets and models for the tested approaches we utilized the ReVerb corpus (Fader et al., 2011), a large scale publicly available webbased open extractions data set, containing about 15 million unique template extractions.3 ReVerb template extractions/instantiations are in the form of a tuple (x, pred, y), containing pred, a verb predicate, x, the argument instantiation of the template’s slot X, and y, the instantiation of the template’s slot Y . ReVerb includes over 600,000 different templates that comprise a verb but may also include other words, for example ‘X can accommodate up to Y’. Yet, many of these templates share a similar meaning, e.g. ‘X accommodate up to Y’, ‘X can accommodate up to Y’, ‘X will accommodate up to Y’, etc. Following Sekine (2005), we clustered templates that share their main verb predicate in order to scale down the number of different predicates in the corpus and collect richer word cooccurrence statistics per predicate. Next, we applied some clean-up preprocessing to the ReVerb extractions. This includes discarding stop words, rare words and non-alphabetical words instantiating either the X or the Y arguments. In addition, we discarded all predicates that co-occur with less than 100 unique argument words in each slot. The remaining corpus consists of 7 million unique extractions and 2,155 verb predicates. Finally, we trained an LDA model, as described in Section 2, using Mallet (McCallum, 2002). Then, for each original context-insensitive similarity measure, we learned from ReVerb a rule-set comprised of the top 500 rules for every identified predicate. To complete the learning, we calculated the topic-biased similarity score for each learned rule under each LDA topic, as specified in our context-sensitive model. We release a rule set comprising the top 500 context-sensitive rules that we learned for each of the verb predicates in our learning corpus, along with our trained LDA 3ReVerb is available at http://reverb.cs. washington.edu/ 1336 Method Lin BInc Cosine Valid 266 254 272 Invalid 545 523 539 Total 811 777 811 Table 3: Sizes of rule application test set for each learned rule-set. model.4 4.2 Evaluation Task To evaluate the performance of the different methods we chose the dataset constructed by Zeichner et al. (2012). 5 This publicly available dataset contains about 6,500 manually annotated predicate template rule applications, each one labeled as correct or incorrect. For example, ‘Jack agree with Jill ↛Jack feel sorry for Jill’ is a rule application in this dataset, labeled as incorrect, and ‘Registration open this month →Registration begin this month’ is another rule application, labeled as correct. Rule applications were generated by randomly sampling extractions from ReVerb, such as (‘Jack’,‘agree with’,‘Jill’) and then sampling possible rules for each, such as ‘agree with →feel sorry for’. Hence, this dataset provides naturally distributed rule inferences with respect to ReVerb. Whenever we evaluated a distributional similarity measure (namely Lin, BInc, or Cosine), we discarded instances from Zeichner et al.’s dataset in which the assessed rule is not in the contextinsensitive rule-set learned for this measure or the argument instantiation of the rule is not in the LDA lexicon. We refer to the remaining instances as the test set per measure, e.g. Lin’s test set. Table 3 details the size of each such test set in our experiment. 
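A rough sketch of the corpus clean-up described above, operating over (x, pred, y) tuples. The stop-word list, the rare-word cutoff and the main-verb extraction function are simplified placeholders (the text does not fully specify them); only the threshold of 100 unique argument words per slot is taken directly from the description.

```python
from collections import Counter, defaultdict

STOP_WORDS = {"the", "a", "an", "it", "this"}   # placeholder list
MIN_WORD_COUNT = 5                              # assumed rare-word cutoff
MIN_UNIQUE_ARGS = 100                           # stated in the text

def clean_extractions(tuples, main_verb):
    """tuples: iterable of (x, pred, y) strings; main_verb: function pred -> verb."""
    word_counts = Counter(w for x, _, y in tuples for w in (x, y))

    def ok(word):
        return word.isalpha() and word not in STOP_WORDS and word_counts[word] >= MIN_WORD_COUNT

    # Group templates by their main verb and collect the unique slot fillers.
    slots = defaultdict(lambda: (set(), set()))
    kept = []
    for x, pred, y in tuples:
        if ok(x) and ok(y):
            verb = main_verb(pred)
            slots[verb][0].add(x)
            slots[verb][1].add(y)
            kept.append((x, verb, y))

    keep_verbs = {v for v, (xs, ys) in slots.items()
                  if len(xs) >= MIN_UNIQUE_ARGS and len(ys) >= MIN_UNIQUE_ARGS}
    return [(x, v, y) for x, v, y in kept if v in keep_verbs]
```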
Finally, the task under which we assessed the tested models is to rank all rule applications in each test set, aiming to rank the valid rule applications above the invalid ones. 5 Results We evaluated the performance of each tested method by measuring Mean Average Precision (MAP) (Manning et al., 2008) of the rule application ranking computed by this method. In order 4Our resource is available at: http://www.cs.biu. ac.il/˜nlp/downloads/wt-rules.html 5The dataset is available at: http:// www.cs.biu.ac.il/˜nlp/downloads/ annotation-rule-application.htm Method Lin BInc Cosine CI 0.503 0.513 0.513 DC 0.451 (1200) 0.455 (1200) 0.455 (1200) SC 0.443 (1200) 0.458 (1200) 0.452 (1200) WT 0.562 (100) 0.584 (50) 0.565 (25) Table 4: MAP values on corresponding test set obtained by each method. Figures in parentheses indicate optimal number of LDA topics. to compute MAP values and corresponding statistical significance, we randomly split each test set into 30 subsets. For each method we computed Average Precision on every subset and then took the average over all subsets as the MAP value. Since all tested context-sensitive approaches are based on LDA topics, we varied for each method the number of LDA topics K that optimizes its performance, ranging from 25 to 1600 topics. We used LDA hyperparameters β = 0.01 and α = 0.1 for K < 600 and α = 50 K for K >= 600. Table 4 presents the optimal MAP performance of each tested measure. Our main result is that our model outperforms all other methods, both context-insensitive and context-sensitive, by a relative increase of more than 10% for all three similarity measures that we tested. This improvement is statistically significant at p < 0.01 for BInc and Lin, and p < 0.015 for Cosine, using paired ttest. This shows that our model indeed successfully leverages contextual information beyond the basic context-agnostic rule scores and is robust across measures. Surprisingly, both baseline topic-level contextsensitive methods, namely DC and SC, underperformed compared to their context-insensitive baselines. While Dinu and Lapata (Dinu and Lapata, 2010b) did show improvement over contextinsensitive DIRT, this result was obtained on the verbs of the Lexical Substitution Task in SemEval (McCarthy and Navigli, 2007), which was manually created with a bias for context-sensitive substitutions. However, our result suggests that topiclevel models might not be robust enough when applied to a random sample of inferences. An interesting indication of the differences between our word-topic model, WT, and topic-only models, DC and SC, lies in the optimal number of LDA topics required for each method. The number of topics in the range 25-100 performed almost equally well under the WT model for all base measures, with a moderate decline for higher numbers. 1337 The need for this rather small number of topics is due to the nature of utilization of topics in WT. Specifically, topics are leveraged for high-level domain disambiguation, while fine grained wordlevel distributional similarity is computed for each rule under each such domain. This works best for a relatively low number of topics. However, in higher numbers, topics relate to narrower domains and then topic biased word level similarity may become less effective due to potential sparseness. On the other hand, DC and SC rely on topics as a surrogate to predicate-argument co-occurrence features, and thus require a relatively large number of them to be effective. Delving deeper into our test-set, Zeichner et al. 
provided a more detailed annotation for each invalid rule application. Specifically, they annotated whether the context under which the rule is applied is valid. For example, in ‘John bought my car ↛John sold my car’ the inference is invalid due to an inherently incorrect rule, but the context is valid. On the other hand in ‘my boss raised my salary ↛my boss constructed my salary’ the context {‘my boss’, ‘my salary’} for applying ‘raise →construct’ is invalid. Following, we split the test-set for the base Lin measure into two testsets: (a) test-setvc, which includes all correct rule applications and incorrect ones only under valid contexts, and (b) test-setivc, which includes again all correct rule applications but incorrect ones only under invalid contexts. Table 5 presents the performance of each compared method on the two test sets. On testsetivc, where context mismatches are abundant, our model outperformed all other baselines (statistically significant at p < 0.01). In addition, this time DC slightly outperformed CI. This result more explicitly shows the advantages of integrating word-level and context-sensitive topiclevel similarities for differentiating valid and invalid contexts for rule applications. Yet, many invalid rule applications occur under valid contexts due to inherently incorrect rules, and we want to make sure that also in this scenario our model does not fall behind the context-insensitive measure. Indeed, on test-setvc, in which context mismatches are rare, our algorithm is still better than the original measure, indicating that WT can be safely applied to distributional similarity measures without concerns of reduced performance in different context scenarios. test-setivc test-setvc Size (valid:invalid) 432 (266:166) 645 (266:379) CI 0.780 0.587 DC 0.796 0.498 SC 0.779 0.512 WT 0.854 0.621 Table 5: MAP results for the two split Lin testsets. 6 Discussion and Future Work This paper addressed the problem of computing context-sensitive reliability scores for predicate inference rules. In particular, we proposed a novel scheme that applies over any base distributional similarity measure which operates at the word level, and computes a single context-insensitive score for a rule. Based on such a measure, our scheme constructs a context-sensitive similarity measure that computes a reliability score for predicate inference rules applications in the context of given arguments. The contextualization of the base similarity score was obtained using a topic-level LDA model, which was used in a novel way. First, it provides a topic bias for learning separate pertopic word-level similarity scores between predicates. Then, given a specific candidate rule application, the LDA model is used to infer the topic distribution relevant to the context specified by the given arguments. Finally, the contextsensitive rule application score is computed as a weighted average of the per-topic word-level similarity scores, which are weighed according to the inferred topic distribution. While most works on context-insensitive predicate inference rules, such as DIRT (Lin and Pantel, 2001), are based on word-level similarity measures, almost all prior models addressing contextsensitive predicate inference rules are based on topic models (except for (Pantel et al., 2007), which was outperformed by later models). 
We therefore focused on comparing the performance of our two-level scheme with state-of-the-art prior topic-level and word-level models of distributional similarity, over a random sample of inference rule applications. Under this natural setting, the twolevel scheme consistently outperformed both types of models when tested with three different base similarity measures. Notably, our model shows stable performance over a large subset of the data 1338 where context sensitivity is rare, while topic-level models tend to underperform in such cases compared to the base context-insensitive methods. Our work is closely related to another research line that addresses lexical similarity and substitution scenarios in context. While we focus on lexical-syntactic predicate templates and instantiations of their argument slots as context, lexical similarity methods consider various lexical units that are not necessarily predicates, with their context typically being the collection of words in a window around them. Various approaches have been proposed to address lexical similarity. A number of works are based on a compositional semantics approach, where a prior representation of a target lexical unit is composed with the representations of words in its given context (Mitchell and Lapata, 2008; Erk and Pad´o, 2008; Thater et al., 2010). Other works (Erk and Pad´o, 2010; Reisinger and Mooney, 2010) use a rather large word window around target words and compute similarities between clusters comprising instances of word windows. In addition, (Dinu and Lapata, 2010a) adapted the predicate inference topic model from (Dinu and Lapata, 2010b) to compute lexical similarity in context. A natural extension of our work would be to extend our two level model to accommodate contextsensitive lexical similarity. For this purpose we will need to redefine the scope of context in our model, and adapt our method to compute contextbiased lexical similarities accordingly. Then we will also be able to evaluate our model on the Lexical Substitution Task (McCarthy and Navigli, 2007), which has been commonly used in recent years as a benchmark for context-sensitive lexical similarity models. In a different NLP task, Eidelman et al. (2012) utilize a similar approach to ours for improving the performance of statistical machine translation (SMT). They learn an LDA model on the source language side of the training corpus with the purpose of identifying implicit sub-domains. Then they utilize the distribution over topics inferred for each document in their corpus to compute separate per-topic translation probability tables. Finally, they train a classifier to translate a given target word based on these tables and the inferred topic distribution of the given document in which the target word appears. A notable difference between our approach and theirs is that we use predicate pseudo-documents consisting of argument instantiations to learn our LDA model, while Eidelman et al. use the real documents in a corpus. We believe that combining these two approaches may improve performance for both textual inference and SMT and plan to experiment with this direction in future work. Acknowledgments This work was partially supported by the Israeli Ministry of Science and Technology grant 3-8705, the Israel Science Foundation grant 880/12, and the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287923 (EXCITEMENT). References Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. 
Global learning of typed entailment rules. In ACL. Rahul Bhagat, Patrick Pantel, Eduard Hovy, and Marina Rey. 2007. Ledir: An unsupervised algorithm for learning directionality of inference rules. In Proceedings of EMNLP-CoNLL. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Lecture Notes in Computer Science, volume 3944, pages 177–190. Georgiana Dinu and Mirella Lapata. 2010a. Measuring distributional similarity in context. In Proceedings of EMNLP. Georgiana Dinu and Mirella Lapata. 2010b. Topic models for meaning similarity in context. In Proceedings of COLING: Posters. Georgiana Dinu and Rui Wang. 2009. Inference rules and their application to recognizing textual entailment. In Proceedings EACL. Vladimir Eidelman, Jordan Boyd-Graber, and Philip Resnik. 2012. Topic models for dynamic translation model adaptation. In Proceedings of the ACL conference short papers. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. In Proceedings of EMNLP. Katrin Erk and Sebastian Pad´o. 2010. Exemplar-based models for word meaning in context. In Proceedings of the ACL conference short papers. 1339 Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of EMNLP. Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. Natural Language Engineering, 16(4):359–389. Dekang Lin and Patrick Pantel. 2001. DIRT – discovery of inference rules from text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2001. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL. Christopher D Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to information retrieval, volume 1. Cambridge University Press Cambridge. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. Diana McCarthy and Roberto Navigli. 2007. Semeval2007 task 10: English lexical substitution task. In Proceedings of SemEval. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT. Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: A taxonomy of relational patterns with semantic types. EMNLP12. Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard Hovy. 2007. ISP: Learning inferential selectional preferences. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics. Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of ACL. Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Alan Ritter, Oren Etzioni, et al. 2010. A latent dirichlet allocation method for selectional preferences. In Proceedings of ACL. Stefan Schoenmackers, Jesse Davis, Oren Etzioni, and Daniel Weld. 2010. Learning first-order horn clauses from web text. In Proceedings of EMNLP. Diarmuid O S´eaghdha. 2010. 
Latent variable models of selectional preference. In Proceedings of ACL. Satoshi Sekine. 2005. Automatic paraphrase discovery based on context and keywords between ne pairs. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference. Idan Szpektor and Ido Dagan. 2008. Learning entailment rules for unary templates. In Proceedings of COLING. Idan Szpektor, Ido Dagan, Roy Bar-Haim, and Jacob Goldberger. 2008. Contextual preferences. In Proceedings of ACL-08: HLT. Stefan Thater, Hagen F¨urstenau, and Manfred Pinkal. 2010. Contextualizing semantic representations using syntactically enriched vector models. In Proceedings of ACL. Naomi Zeichner, Jonathan Berant, and Ido Dagan. 2012. Crowdsourcing inference-rule evaluation. In Proceedings of ACL (short papers). 1340
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1341–1351, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity Mohammad Taher Pilehvar, David Jurgens and Roberto Navigli Department of Computer Science Sapienza University of Rome {pilehvar,jurgens,navigli}@di.uniroma1.it Abstract Semantic similarity is an essential component of many Natural Language Processing applications. However, prior methods for computing semantic similarity often operate at different levels, e.g., single words or entire documents, which requires adapting the method for each data type. We present a unified approach to semantic similarity that operates at multiple levels, all the way from comparing word senses to comparing text documents. Our method leverages a common probabilistic representation over word senses in order to compare different types of linguistic data. This unified representation shows state-ofthe-art performance on three tasks: semantic textual similarity, word similarity, and word sense coarsening. 1 Introduction Semantic similarity is a core technique for many topics in Natural Language Processing such as Textual Entailment (Berant et al., 2012), Semantic Role Labeling (F¨urstenau and Lapata, 2012), and Question Answering (Surdeanu et al., 2011). For example, textual similarity enables relevant documents to be identified for information retrieval (Hliaoutakis et al., 2006), while identifying similar words enables tasks such as paraphrasing (Glickman and Dagan, 2003), lexical substitution (McCarthy and Navigli, 2009), lexical simplification (Biran et al., 2011), and Web search result clustering (Di Marco and Navigli, 2013). Approaches to semantic similarity have often operated at separate levels: methods for word similarity are rarely applied to documents or even single sentences (Budanitsky and Hirst, 2006; Radinsky et al., 2011; Halawi et al., 2012), while document-based similarity methods require more linguistic features, which often makes them inapplicable at the word or microtext level (Salton et al., 1975; Maguitman et al., 2005; Elsayed et al., 2008; Turney and Pantel, 2010). Despite the potential advantages, few approaches to semantic similarity operate at the sense level due to the challenge in sense-tagging text (Navigli, 2009); for example, none of the top four systems in the recent SemEval-2012 task on textual similarity compared semantic representations that incorporated sense information (Agirre et al., 2012). We propose a unified approach to semantic similarity across multiple representation levels from senses to documents, which offers two significant advantages. First, the method is applicable independently of the input type, which enables meaningful similarity comparisons across different scales of text or lexical levels. Second, by operating at the sense level, a unified approach is able to identify the semantic similarities that exist independently of the text’s lexical forms and any semantic ambiguity therein. For example, consider the sentences: t1. A manager fired the worker. t2. An employee was terminated from work by his boss. A surface-based approach would label the sentences as dissimilar due to the minimal lexical overlap. However, a sense-based representation enables detection of the similarity between the meanings of the words, e.g., fire and terminate. 
Indeed, an accurate, sense-based representation is essential for cases where different words are used to convey the same meaning. The contributions of this paper are threefold. First, we propose a new unified representation of the meaning of an arbitrarily-sized piece of text, referred to as a lexical item, using a sense-based probability distribution. Second, we propose a novel alignment-based method for word sense dis1341 ambiguation during semantic comparison. Third, we demonstrate that this single representation can achieve state-of-the-art performance on three similarity tasks, each operating at a different lexical level: (1) surpassing the highest scores on the SemEval-2012 task on textual similarity (Agirre et al., 2012) that compares sentences, (2) achieving a near-perfect performance on the TOEFL synonym selection task proposed by Landauer and Dumais (1997), which measures word pair similarity, and also obtaining state-of-the-art performance in terms of the correlation with human judgments on the RG-65 dataset (Rubenstein and Goodenough, 1965), and finally (3) surpassing the performance of Snow et al. (2007) in a sensecoarsening task that measures sense similarity. 2 A Unified Semantic Representation We propose a representation of any lexical item as a distribution over a set of word senses, referred to as the item’s semantic signature. We begin with a formal description of the representation at the sense level (Section 2.1). Following this, we describe our alignment-based disambiguation algorithm which enables us to produce sense-based semantic signatures for those lexical items (e.g., words or sentences) which are not sense annotated (Section 2.2). Finally, we propose three methods for comparing these signatures (Section 2.3). As our sense inventory, we use WordNet 3.0 (Fellbaum, 1998). 2.1 Semantic Signatures The WordNet ontology provides a rich network structure of semantic relatedness, connecting senses directly with their hypernyms, and providing information on semantically similar senses by virtue of their nearby locality in the network. Given a particular node (sense) in the network, repeated random walks beginning at that node will produce a frequency distribution over the nodes in the graph visited during the walk. To extend beyond a single sense, the random walk may be initialized and restarted from a set of senses (seed nodes), rather than just one; this multi-seed walk produces a multinomial distribution over all the senses in WordNet with higher probability assigned to senses that are frequently visited from the seeds. Prior work has demonstrated that multinomials generated from random walks over WordNet can be successfully applied to linguistic tasks such as word similarity (Hughes and Ramage, 2007; Agirre et al., 2009), paraphrase recognition, textual entailment (Ramage et al., 2009), and pseudoword generation (Pilehvar and Navigli, 2013). Formally, we define the semantic signature of a lexical item as the multinomial distribution generated from the random walks over WordNet 3.0 where the set of seed nodes is the set of senses present in the item. This representation encompasses both when the item is itself a single sense and when the item is a sense-tagged sentence. To construct each semantic signature, we use the iterative method for calculating topic-sensitive PageRank (Haveliwala, 2002). Let M be the adjacency matrix for the WordNet network, where edges connect senses according to the relations defined in WordNet (e.g., hypernymy and meronymy). 
We further enrich M by connecting a sense with all the other senses that appear in its disambiguated gloss.1 Let ⃗v(0) denote the probability distribution for the starting location of the random walker in the network. Given the set of senses S in a lexical item, the probability mass of ⃗v(0) is uniformly distributed across the senses si ∈S, with the mass for all sj /∈S set to zero. The PageRank may then be computed using: ⃗v (t) = (1 −α) M ⃗v (t−1) + α ⃗v (0) (1) where at each iteration the random walker may jump to any node si ∈S with probability α/|S|. We follow standard convention and set α to 0.15. We repeat the operation in Eq. 1 for 30 iterations, which is sufficient for the distribution to converge. The resulting probability vector ⃗v(t) is the semantic signature of the lexical item, as it has aggregated its senses’ similarities over the entire graph. For our semantic signatures we used the UKB2 off-the-shelf implementation of topicsensitive PageRank. 2.2 Alignment-Based Disambiguation Commonly, semantic comparisons are between word pairs or sentence pairs that do not have their lexical content sense-annotated, despite the potential utility of sense annotation in making semantic comparisons. However, traditional forms of word sense disambiguation are difficult for short texts and single words because little or no contextual information is present to perform the disambiguation task. Therefore, we propose a novel 1http://wordnet.princeton.edu 2http://ixa2.si.ehu.es/ukb/ 1342 Figure 1: (a) Example alignments of the first sense of term manager (in sentence t1) to the two first senses of the word types in sentence t2, along with the similarity of the two senses’ semantic signatures; (b) Alignments which maximize the similarities across words in t1 and t2 (the source side of an alignment is taken as the disambiguated sense of its corresponding word). alignment-based sense disambiguation that leverages the content of the paired item in order to disambiguate each element. Leveraging the paired item enables our approach to disambiguate where traditional sense disambiguation methods can not due to insufficient context. We view sense disambiguation as an alignment problem. Given two arbitrarily ordered texts, we seek the semantic alignment that maximizes the similarity of the senses of the context words in both texts. To find this maximum we use an alignment procedure which, for each word type wi in item T1, assigns wi to the sense that has the maximal similarity to any sense of the word types in the compared text T2. Algorithm 1 formalizes the alignment process, which produces a sense disambiguated representation as a result. Senses are compared in terms of their semantic signatures, denoted as function R. We consider multiple definitions of R, defined later in Section 2.3. As a part of the disambiguation procedure, we leverage the one sense per discourse heuristic of Yarowsky (1995); given all the word types in two compared lexical items, each type is assigned a single sense, even if it is used multiple times. Additionally, if the same word type appears in both sentences, both will always be mapped to the same sense. Although such a sense assignment is potentially incorrect, assigning both types to the same sense results in a representation that does no worse than a surface-level comparison. We illustrate the alignment-based disambiguation procedure using the two example sentences t1 and t2 given in Section 1. 
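Before walking through that example, it may help to see Eq. 1 spelled out. The sketch below is only an illustrative power iteration under the assumption of a precomputed, column-stochastic transition matrix M over WordNet senses; the authors themselves rely on the off-the-shelf UKB implementation rather than code like this.

```python
import numpy as np

def semantic_signature(M, seed_senses, alpha=0.15, iterations=30):
    """Illustrative sketch of the topic-sensitive PageRank in Eq. 1.

    M           -- column-stochastic transition matrix over WordNet senses
                   (assumed precomputed; in practice a scipy.sparse matrix,
                   since WordNet has well over 100,000 senses)
    seed_senses -- indices of the senses S present in the lexical item
    """
    n = M.shape[0]
    v0 = np.zeros(n)
    v0[list(seed_senses)] = 1.0 / len(seed_senses)   # uniform restart mass over S
    v = v0.copy()
    for _ in range(iterations):                      # 30 iterations suffice to converge
        v = (1 - alpha) * M.dot(v) + alpha * v0      # Eq. 1 with alpha = 0.15
    return v                                         # the item's semantic signature
```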
Figure 1(a) illustrates example alignments of the first sense of manager to the first two senses of the word types in sentence t2 along with the similarity of the two senses’ Algorithm 1 Alignment-based Sense Disambiguation Input: T1 and T2, the sets of word types being compared Output: P, the set of disambiguated senses for T1 1: P ←∅ 2: for each token ti ∈T1 3: max sim ←0 4: best si ←null 5: for each token tj ∈T2 6: for each si ∈Senses(ti), sj ∈Senses(tj) 7: sim ←R(si, sj) 8: if sim > max sim then 9: max sim = sim 10: best si = si 11: P ←P ∪{best si} 12: return P semantic signatures. For the senses of manager, sense manager1 n obtains the maximal similarity value to boss1 n among all the possible pairings of the senses for the word types in sentence t2, and as a result is selected as the sense labeling for manager in sentence t1.3 Figure 1(b) shows the final, maximally-similar sense alignment of the word types in t1 and t2. The resulting alignment produces the following sets of senses: Pt1 = {manager1 n, fire4 v, worker1 n} Pt2 = {employee1 n, terminate4 v, work3 n, boss2 n} where Px denotes the corresponding set of senses of sentence x. 2.3 Semantic Signature Similarity Cosine Similarity. In order to compare semantic signatures, we adopt the Cosine similarity measure as a baseline method. The measure is computed by treating each multinomial as a vector and then calculating the normalized dot product of the two signatures’ vectors. 3We follow Navigli (2009) and denote with wi p the i-th sense of w in WordNet with part of speech p. 1343 However, a semantic signature is, in essence, a weighted ranking of the importance of WordNet senses for each lexical item. Given that the WordNet graph has a non-uniform structure, and also given that different lexical items may be of different sizes, the magnitudes of the probabilities obtained may differ significantly between the two multinomial distributions. Therefore, for computing the similarity of two signatures, we also consider two nonparametric methods that use the ranking of the senses, rather than their probability values, in the multinomial. Weighted Overlap. Our first measure provides a nonparametric similarity by comparing the similarity of the rankings for intersection of the senses in both semantic signatures. However, we additionally weight the similarity such that differences in the highest ranks are penalized more than differences in lower ranks. We refer to this measure as the Weighted Overlap. Let S denote the intersection of all senses with non-zero probability in both signatures and rj i denote the rank of sense si ∈S in signature j, where rank 1 denotes the highest rank. The sum of the two ranks r1 i and r2 i for a sense is then inverted, which (1) weights higher ranks more and (2) when summed, provides the maximal value when a sense has the same rank in both signatures. The unnormalized weighted overlap is then calculated as P|S| i=1(r1 i + r2 i )−1. Then, to bound the similarity value in [0, 1], we normalize the sum by its maximum value, P|S| i=1(2i)−1, which occurs when each sense has the same rank in both signatures. Top-k Jaccard. Our second measure uses the ranking to identify the top-k senses in a signature, which are treated as the best representatives of the conceptual associates. 
We hypothesize that a specific rank ordering may be attributed to small differences in the multinomial probabilities, which can lower rank-based similarities when one of the compared orderings is perturbed due to slightly different probability values. Therefore, we consider the top-k senses as an unordered set, with equal importance in the signature. To compare two signatures, we compute the Jaccard Index of the two signatures’ sets: RJac(Uk, Vk) = |Uk ∩Vk| |Uk ∪Vk| (2) where Uk denotes the set of k senses with the highest probability in the semantic signature U. Dataset MSRvid MSRpar SMTeuroparl OnWN SMTnews Training 750 750 734 Test 750 750 459 750 399 Table 1: Statistics of the provided datasets for the SemEval-2012 Semantic Textual Similarity task. 3 Experiment 1: Textual Similarity Measuring semantic similarity of textual items has applications in a wide variety of NLP tasks. As our benchmark, we selected the recent SemEval2012 task on Semantic Textual Similarity (STS), which was concerned with measuring the semantic similarity of sentence pairs. The task received considerable interest by facilitating a meaningful comparison between approaches. 3.1 Experimental Setup Data. We follow the experimental setup used in the STS task (Agirre et al., 2012), which provided five test sets, two of which had accompanying training data sets for tuning system performance. Each sentence pair in the datasets was given a score from 0 to 5 (low to high similarity) by human judges, with a high inter-annotator agreement of around 0.90 when measured using the Pearson correlation coefficient. Table 1 lists the number of sentence pairs in training and test portions of each dataset. Comparison Systems. The top-ranking participating systems in the SemEval-2012 task were generally supervised systems utilizing a variety of lexical resources and similarity measurement techniques. We compare our results against the top three systems of the 88 submissions: TLsim and TLsyn, the two systems of ˇSari´c et al. (2012), and the UKP2 system (B¨ar et al., 2012). UKP2 utilizes extensive resources among which are a Distributional Thesaurus computed on 10M dependencyparsed English sentences. In addition, the system utilizes techniques such as Explicit Semantic Analysis (Gabrilovich and Markovitch, 2007) and makes use of resources such as Wiktionary and Wikipedia, a lexical substitution system based on supervised word sense disambiguation (Biemann, 2013), and a statistical machine translation system. The TLsim system uses the New York Times Annotated Corpus, Wikipedia, and Google Book Ngrams. The TLsyn system also uses Google Book Ngrams, as well as dependency parsing and named entity recognition. 1344 Ranking System Overall Dataset-specific ALL ALLnrm Mean ALL ALLnrm Mean Mpar Mvid SMTe OnWN SMTn 1 1 1 ADW 0.866 0.871 0.711 0.694 0.887 0.555 0.706 0.604 2 3 2 UKP2 0.824 0.858 0.677 0.683 0.873 0.528 0.664 0.493 3 4 6 TLsyn 0.814 0.857 0.660 0.698 0.862 0.361 0.704 0.468 4 2 3 TLsim 0.813 0.864 0.675 0.734 0.880 0.477 0.679 0.398 Table 2: Performance of our system (ADW) and the 3 top-ranking participating systems (out of 88) in the SemEval-2012 Semantic Textual Similarity task. Rightmost columns report the corresponding Pearson correlation r for individual datasets, i.e., MSRpar (Mpar), MSRvid (Mvid), SMTeuroparl (SMTe), OnWN (OnWN) and SMTnews (SMTn). We also provide scores according to the three official evaluation metrics (i.e., ALL, ALLnrm, and Mean). Rankings are also presented based on the three metrics. 
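For concreteness, the three signature-comparison measures of Section 2.3 could be realized roughly as follows. This is a hedged sketch rather than the authors' code; in particular, the Weighted Overlap is shown with ranks computed over the shared senses, which is one possible reading of its definition.

```python
import numpy as np
from scipy.stats import rankdata

def cosine(u, v):
    # normalized dot product of the two signature vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weighted_overlap(u, v):
    # S: senses with non-zero probability in both signatures
    shared = np.nonzero((u > 0) & (v > 0))[0]
    if len(shared) == 0:
        return 0.0
    r_u = rankdata(-u[shared], method="ordinal")   # rank 1 = highest probability
    r_v = rankdata(-v[shared], method="ordinal")
    score = np.sum(1.0 / (r_u + r_v))              # sum_i (r1_i + r2_i)^-1
    norm = np.sum(1.0 / (2.0 * np.arange(1, len(shared) + 1)))  # sum_i (2i)^-1
    return float(score / norm)                     # bounded in [0, 1]

def top_k_jaccard(u, v, k):
    # Eq. 2: Jaccard index of the top-k sense sets of the two signatures
    top_u = set(np.argsort(-u)[:k])
    top_v = set(np.argsort(-v)[:k])
    return len(top_u & top_v) / len(top_u | top_v)
```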
System Configuration. Here we describe the configuration of our approach, which we have called Align, Disambiguate and Walk (ADW). The STS task uses human similarity judgments on an ordinal scale from 0 to 5. Therefore, in ADW we adopted a similar approach to generating similarity values to that adopted by other participating systems, whereby a supervised system is trained to combine multiple similarity judgments to produce a final rating consistent with the human annotators. We utilized the WEKA toolkit (Hall et al., 2009) to train a Gaussian Processes regression model for each of the three training sets (cf. Table 1). The features discussed hereafter were considered in our regression model. Main features. We used the scores calculated using all three of our semantic signature comparison methods as individual features. Although the Jaccard comparison is parameterized, we avoided tuning and instead used four features for distinct values of k: 250, 500, 1000, and 2500. String-based features. Additionally, because the texts often contain named entities which are not present in WordNet, we incorporated the similarity values produced by four string-based measures, which were used by other teams in the STS task: (1) longest common substring which takes into account the length of the longest overlapping contiguous sequence of characters (substring) across two strings (Gusfield, 1997), (2) longest common subsequence which, instead, finds the longest overlapping subsequence of two strings (Allison and Dix, 1986), (3) Greedy String Tiling which allows reordering in strings (Wise, 1993), and (4) the character/word n-gram similarity proposed by Barr´on-Cede˜no et al. (2010). We followed ˇSari´c et al. (2012) and used the models trained on the SMTeuroparl and MSRpar datasets for testing on the SMTnews and OnWN test sets, respectively. 3.2 STS Results Three evaluation metrics are provided by the organizers of the SemEval-2012 STS task, all of which are based on Pearson correlation r of human judgments with system outputs: (1) the correlation value for the concatenation of all five datasets (ALL), (2) a correlation value obtained on a concatenation of the outputs, separately normalized by least square (ALLnrm), and (3) the weighted average of Pearson correlations across datasets (Mean). Table 2 shows the scores obtained by ADW for the three evaluation metrics, as well as the Pearson correlation values obtained on each of the five test sets (rightmost columns). We also show the results obtained by the three top-ranking participating systems (i.e., UKP2, TLsim, and TLsyn). The leftmost three columns show the system rankings according to the three metrics. As can be seen from Table 2, our system (ADW) outperforms all the 88 participating systems according to all the evaluation metrics. Our system shows a statistically significant improvement on the SMTnews dataset, with an increase in the Pearson correlation of over 0.10. MSRpar (MPar) is the only dataset in which TLsim (ˇSari´c et al., 2012) achieves a higher correlation with human judgments. Named entity features used by the TLsim system could be the reason for its better performance on the MSRpar dataset, which contains a large number of named entities. 3.3 Similarity Measure Analysis To gain more insight into the impact of our alignment-based disambiguation approach, we carried out a 10-fold cross-validation on the three training datasets (cf. Table 1) using the systems described hereafter. ADW-MF. 
To build this system, we utilized our main features only; i.e., we did not make use of additional string-based features. 1345 DW. Similarly to ADW-MF, this system utilized the main features only. In DW, however, we replaced our alignment-based disambiguation phase with a random walk-based WSD system that disambiguated the sentences separately, without performing any alignment. As our WSD system, we used UKB, a state-of-the-art knowledge-based WSD system that is based on the same topicsensitive PageRank algorithm used by our approach. UKB initializes the algorithm from all senses of the words in the context of a word to be disambiguated. It then picks the most relevant sense of the word according to the resulting probability vector. As the lexical knowledge base of UKB, we used the same semantic network as that utilized by our approach for calculating semantic signatures. Table 3 lists the performance values of the two above-mentioned systems on the three training sets in terms of Pearson correlation. In addition, we present in the table correlation scores for four other similarity measures reported by B¨ar et al. (2012): • Pairwise Word Similarity that comprises of a set of WordNet-based similarity measures proposed by Resnik (1995), Jiang and Conrath (1997), and Lin (1998b). The aggregation strategy proposed by Corley and Mihalcea (2005) has been utilized for extending these word-to-word similarity measures for calculating text-to-text similarities. • Explicit Semantic Analysis (Gabrilovich and Markovitch, 2007) where the highdimensional vectors are obtained on WordNet, Wikipedia and Wiktionary. • Distributional Thesaurus where a similarity score is computed similarly to that of Lin (1998a) using a distributional thesaurus obtained from a 10M dependency-parsed sentences of English newswire. • Character n-grams which were also used as one of our additional features. As can be seen from Table 3, our alignmentbased disambiguation approach (ADW-MF) is better suited to the task than a conventional WSD approach (DW). Another interesting point is the high scores achieved by the Character n-grams Similarity measure Dataset Mpar Mvid SMTe DW 0.448 0.820 0.660 ADW-MF 0.485 0.842 0.721 Explicit Semantic Analysis 0.427 0.781 0.619 Pairwise Word Similarity 0.564 0.835 0.527 Distributional Thesaurus 0.494 0.481 0.365 Character n-grams 0.658 0.771 0.554 Table 3: Performance of our main-feature system with conventional WSD (DW) and with the alignment-based disambiguation approach (ADWMF) vs. four other similarity measures, using 10fold cross validation on the training datasets MSRpar (Mpar), MSRvid (Mvid), and SMTeuroparl (SMTe). measure. This confirms that string-based methods are strong baselines for semantic textual similarity. Except for the MSRpar (Mpar) dataset, our system (ADW-MF) outperforms all other similarity measures. The scores obtained by Explicit Semantic Analysis and Distributional Thesaurus are not competitive on any dataset. On the other hand, Pairwise Word Similarity achieves a high performance on MSRpar and MSRvid datasets, but performs surprisingly low on the SMTeuroparl dataset. 4 Experiment 2: Word Similarity We now proceed from the sentence level to the word level. Word similarity has been a key problem for lexical semantics, with significant efforts being made by approaches in distributional semantics to accurately identify synonymous words (Turney and Pantel, 2010). 
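As Section 4.1 explains below, comparing two words reduces to comparing their best-matching pair of senses. A minimal sketch of that reduction is given here; senses_of and signature_of are hypothetical helpers (the latter standing in for the Eq. 1 signature seeded with a single sense), and the comparison function could be, for instance, the weighted_overlap sketch given earlier.

```python
def word_similarity(word1, word2, compare):
    """Word similarity as the similarity of the best-matching sense pair.

    senses_of() and signature_of() are hypothetical helpers: the former lists
    the WordNet senses of a word, the latter returns the Eq. 1 signature
    seeded with a single sense.
    """
    best = 0.0
    for s1 in senses_of(word1):
        sig1 = signature_of(s1)
        for s2 in senses_of(word2):
            best = max(best, compare(sig1, signature_of(s2)))
    return best

def answer_toefl_question(target, choices, compare):
    # pick the candidate whose best sense pair is most similar to the target
    return max(choices, key=lambda c: word_similarity(target, c, compare))

# e.g. answer_toefl_question("fanciful",
#                            ["familiar", "apparent", "imaginative", "logical"],
#                            weighted_overlap)
```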
Different evaluation methods exist in the literature for evaluating the performance of a word-level semantic similarity measure; we adopted two well-established benchmarks: synonym recognition and correlating word similarity judgments with those from human annotators. For synonym recognition, we used the TOEFL dataset created by Landauer and Dumais (1997). The dataset consists of 80 multiple-choice synonym questions from the TOEFL test; a word is paired with four options, one of which is a valid synonym. Test takers with English as a second language averaged 64.5% correct. Despite multiple approaches, only recently has the test been answered perfectly (Bullinaria and Levy, 2012), underscoring the challenge of synonym recognition. 1346 Approach Accuracy PPMIC (Bullinaria and Levy, 2007) 85.00% GLSA (Matveeva et al., 2005) 86.25% LSA (Rapp, 2003) 92.50% ADWJac 93.75±2.5% ADWWO 95.00% ADWCos 96.25% PR (Turney et al., 2003) 97.50% PCCP (Bullinaria and Levy, 2012) 100.00% Table 4: Accuracy on the 80-question TOEFL Synonym test. ADWJac, ADWWO, and ADWCos correspond to results with the Jaccard, Weighted Overlap and Cosine signature comparison measures, respectively. For the similarity judgment evaluation, we used as benchmark the RG-65 dataset created by Rubenstein and Goodenough (1965). The dataset contains 65 word pairs judged by 51 human subjects on a scale of 0 to 4 according to their semantic similarity. Ideally, a measure’s similarity judgments are expected to be highly correlated with those of humans. To be consistent with the previous literature (Hughes and Ramage, 2007; Agirre et al., 2009), we used Spearman’s rank correlation in our experiment. 4.1 Experimental Setup Our alignment-based sense disambiguation transforms the task of comparing individual words into that of calculating the similarity of the bestmatching sense pair across the two words. As there is no training data we do not optimize the k value for computing signature similarity with the Jaccard index; instead, we report, for the synonym recognition and the similarity judgment evaluations, the respective range of accuracies and the average correlation obtained upon using five values of k randomly selected in the range [50, 2500]: 678, 1412, 1692, 2358, 2387. 4.2 Word Similarity Results: TOEFL dataset Table 4 lists the accuracy performance of the system in comparison to the existing state of the art on the TOEFL test. ADWWO, ADWCos, and ADWJac correspond to our approach when Weighted Overlap, Cosine, and Jaccard signature comparison measures are used, respectively. Despite not being tuned for the task, our model achieves near-perfect performance, answering all but three questions correctly with the Cosine measure. Among the top-performing approaches, only Word Synonym choices (correct in bold) fanciful familiar apparent⋆imaginative† logical verbal oral† overt fitting verbose⋆ resolved settled⋆forgotten† publicized examined percentage volume sample proportion profit†⋆ figure list solve⋆ divide† express highlight alter† imitate accentuate⋆ restore Table 5: Questions answered incorrectly by our approach. Symbols † and ⋆correspond to the choices of our approach with the Weighted Overlap and Cosine signature comparisons respectively. We do not include the mistakes made when the Jaccard measure was used as they vary with the k value. that of Rapp (2003) uses word senses, an approach that is outperformed by our method. The errors produced by our system were largely the result of sense locality in the WordNet network. 
Table 5 highlights the incorrect responses. The synonym mistakes reveal cases where senses of the two words are close in WordNet, indicating some relatedness. For example, percentage may be interpreted colloquially as monetary value (e.g., “give me my percentage”) and elicits the synonym of profit in the economic domain, which ADW incorrectly selects as a synonym. 4.3 Word Similarity Results: RG-65 dataset Table 6 shows the Spearman’s ρ rank correlation coefficients with human judgments on the RG-65 dataset. As can be seen from the Table, our approach with the Weighted Overlap signature comparison improves over the similar approach of Hughes and Ramage (2007) which, however, does not involve the disambiguation step and considers a word as a whole unit as represented by the set of its senses. 5 Experiment 3: Sense Similarity WordNet is known to be a fine-grained sense inventory with many related word senses (Palmer et al., 2007). Accordingly, multiple approaches have attempted to identify highly similar senses in order to produce a coarse-grained sense inventory. We adopt this task as a way of evaluating our similarity measure at the sense level. 5.1 Coarse-graining Background Earlier work on reducing the polysemy of sense inventories has considered WordNet-based sense relatedness measures (Mihalcea and Moldovan, 2001) and corpus-based vector representations of 1347 Approach Correlation ADWCos 0.825 Agirre et al. (2009) 0.830 Hughes and Ramage (2007) 0.838 Zesch et al. (2008) 0.840 ADWJac 0.841 ADWWO 0.868 Table 6: Spearman’s ρ correlation coefficients with human judgments on the RG-65 dataset. ADWJac, ADWWO, and ADWCos correspond to results with the Jaccard, Weighted Overlap and Cosine signature comparison measures respectively. word senses (Agirre and Lopez, 2003; McCarthy, 2006). Navigli (2006) proposed an automatic approach for mapping WordNet senses to the coarsegrained sense distinctions of the Oxford Dictionary of English (ODE). The approach leverages semantic similarities in gloss definitions and the hierarchical relations between senses in the ODE to cluster WordNet senses. As current state of the art, Snow et al. (2007) developed a supervised SVM classifier that utilized, as its features, several earlier sense relatedness techniques such as those implemented in the WordNet::Similarity package (Pedersen et al., 2004). The classifier also made use of resources such as topic signatures data (Agirre and de Lacalle, 2004), the WordNet domain dataset (Magnini and Cavagli`a, 2000), and the mappings of WordNet senses to ODE senses produced by Navigli (2006). 5.2 Experimental Setup We benchmark the accuracy of our similarity measure in grouping word senses against those of Navigli (2006) and Snow et al. (2007) on two datasets of manually-labeled sense groupings of WordNet senses: (1) sense groupings provided as a part of the Senseval-2 English Lexical Sample WSD task (Kilgarriff, 2001) which includes nouns, verbs and adjectives; (2) sense groupings included in the OntoNotes project4 (Hovy et al., 2006) for nouns and verbs. Following the evaluation methodology of Snow et al. (2007), we combine the Senseval-2 and OntoNotes datasets into a third dataset. Snow et al. 
(2007) considered sense grouping as a binary classification task whereby for each word every possible pairing of senses has to be classified 4Sense groupings belong to a pre-version 1.0: http:// cemantix.org/download/sense/ontonotes-sense-groups.tar.gz Onto SE-2 Onto + SE-2 Method Noun Verb Noun Verb Adj Noun Verb RCos 0.406 0.522 0.450 0.465 0.484 0.441 0.485 RWO 0.421 0.544 0.483 0.482 0.531 0.470 0.503 RJac 0.418 0.531 0.478 0.473 0.501 0.465 0.493 SVM 0.370 0.455 NA NA 0.473 0.423 0.432 ODE 0.218 0.396 NA NA 0.371 0.331 0.288 Table 7: F-score sense merging evaluation on three hand-labeled datasets: OntoNotes (Onto), Senseval-2 (SE-2), and combined (Onto+SE-2). Results are reported for all three of our signature comparison measures and also for two previous works (last two rows). as either merged or not-merged. We constructed a simple threshold-based classifier to perform the same binary classification. To this end, we calculated the semantic similarity of each sense pair and then used a threshold value t to classify the pair as merged if similarity ≥t and not-merged otherwise. We sampled out 10% of the dataset for tuning the value of t, thus adapting our classifier to the fine granularity of the dataset. We used the same held-out instances to perform a tuning of the k value used for Jaccard index, over the same values of k as in Experiment 1 (cf. Section 3). 5.3 Sense Merging Results For a binary classification task, we can directly calculate precision, recall and F-score by constructing a contingency table. We show in Table 7 the F-score performance of our classifier as obtained by an averaged 10-fold cross-validation. Results are presented for all three of the measures of semantic signature comparison and for the three datasets: OntoNotes, Senseval-2, and the two combined. In addition, we show in Table 7 the F-score results provided by Snow et al. (2007) for their SVM-based system and for the mapping-based approach of Navigli (2006), denoted by ODE. Table 7 shows that our methodology yields improvements over previous work on both datasets and for all parts of speech, irrespective of the semantic signature comparison method used. Among the three methods, Weighted Overlap achieves the best performance, which demonstrates that our transformation of semantic signatures into ordered lists of concepts and calculating similarity by rank comparison has been helpful. 1348 6 Related Work Due to the wide applicability of semantic similarity, significant efforts have been made at different lexical levels. Early work on document-level similarity was driven by information retrieval. Vector space methods provided initial successes (Salton et al., 1975), but often suffer from data sparsity when using small documents, or when documents use different word types, as in the case of paraphrases. Later efforts such as LSI (Deerwester et al., 1990), PLSA (Hofmann, 2001) and Topic Models (Blei et al., 2003; Steyvers and Griffiths, 2007) overcame these sparsity issues using dimensionality reduction techniques or modeling the document using latent variables. However, such methods were still most suitable for comparing longer texts. Complementary approaches have been developed specifically for comparing shorter texts, such as those used in the SemEval-2012 STS task (Agirre et al., 2012). Most similar to our approach are the methods of Islam and Inkpen (2008) and Corley and Mihalcea (2005), who performed a word-to-word similarity alignment; however, they did not operate at the sense level. Ramage et al. 
(2009) used a similar semantic representation of short texts from random walks on WordNet, which was applied to paraphrase recognition and textual entailment. However, unlike our approach, their method does not perform sense disambiguation prior to building the representation and therefore potentially suffers from ambiguity. A significant amount of effort has also been put into measuring similarity at the word level, frequently by approaches that use distributional semantics (Turney and Pantel, 2010). These methods use contextual features to represent semantics at the word level, whereas our approach represents word semantics at the sense level. Most similar to our approach are those of Agirre et al. (2009) and Hughes and Ramage (2007), which represent word meaning as the multinomials produced from random walks on the WordNet graph. However, unlike our approach, neither of these disambiguates the two words being compared, which potentially conflates the meanings and lowers the similarity judgment. Measures of sense relatedness have frequently leveraged the structural properties of WordNet (e.g., path lengths) to compare senses. Budanitsky and Hirst (2006) provided a survey of such WordNet-based measures. The main drawback with these approaches lies in the WordNet structure itself, where frequently two semantically similar senses are distant in the WordNet hierarchy. Possible solutions include relying on widercoverage networks such as WikiNet (Nastase and Strube, 2013) or multilingual ones such as BabelNet (Navigli and Ponzetto, 2012b). Fewer works have focused on measuring the similarity – as opposed to relatedness – between senses. The topic signatures method of Agirre and Lopez (2003) represents each sense as a vector over corpusderived features in order to build comparable sense representations. However, topic signatures often produce lower quality representations due to sparsity in the local structure of WordNet, especially for rare senses. In contrast, the random walk used in our approach provides a denser, and thus more comparable, representation for all WordNet senses. 7 Conclusions This paper presents a unified approach for computing semantic similarity at multiple lexical levels, from word senses to texts. Our method leverages a common probabilistic representation at the sense level for all types of linguistic data. We demonstrate that our semantic representation achieves state-of-the-art performance in three experiments using semantic similarity at different lexical levels (i.e., sense, word, and text), surpassing the performance of previous similarity measures that are often specifically targeted for each level. In future work, we plan to explore the impact of the sense inventory-based network used in our semantic signatures. Specifically, we plan to investigate higher coverage inventories such as BabelNet (Navigli and Ponzetto, 2012a), which will handle texts with named entities and rare senses that are not in WordNet, and will also enable cross-lingual semantic similarity. Second, we plan to evaluate our method on larger units of text and formalize comparison methods between different lexical levels. Acknowledgments The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. We would like to thank Sameer S. Pradhan for providing us with an earlier version of the OntoNotes dataset. 1349 References Eneko Agirre and Oier Lopez de Lacalle. 2004. Publicly available topic signatures for all WordNet nominal senses. 
In Proceedings of LREC, pages 1123–1126, Lisbon, Portugal. Eneko Agirre and Oier Lopez. 2003. Clustering WordNet word senses. In Proceedings of RANLP, pages 121–130, Borovets, Bulgaria. Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of NAACL, pages 19–27, Boulder, Colorado. Eneko Agirre, Daniel Cer, Mona Diab, and Aitor GonzalezAgirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of SemEval-2012, pages 385–393, Montreal, Canada. Lloyd Allison and Trevor I. Dix. 1986. A bit-string longestcommon-subsequence algorithm. Information Processing Letters, 23(6):305–310. Daniel B¨ar, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: Computing semantic textual similarity by combining multiple content similarity measures. In Proceedings of SemEval-2012, pages 435–440, Montreal, Canada. Alberto Barr´on-Cede˜no, Paolo Rosso, Eneko Agirre, and Gorka Labaka. 2010. Plagiarism detection across distant language pairs. In Proceedings of COLING, pages 37–45, Beijing, China. Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2012. Learning entailment relations by global graph structure optimization. Computational Linguistics, 38(1):73–111. Chris Biemann. 2013. Creating a system for lexical substitutions from scratch using crowdsourcing. Language Resources and Evaluation, 47(1):97–122. Or Biran, Samuel Brody, and No´emie Elhadad. 2011. Putting it simply: a context-aware approach to lexical simplification. In Proceedings of ACL, pages 496–501, Portland, Oregon. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. The Journal of Machine Learning Research, 3:993–1022. Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based measures of Lexical Semantic Relatedness. Computational Linguistics, 32(1):13–47. John A. Bullinaria and Joseph. P. Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, (3):510. John A. Bullinaria and Joseph P. Levy. 2012. Extracting semantic representations from word co-occurrence statistics: stop-lists, stemming, and SVD. Behavior Research Methods, 44:890–907. Courtney Corley and Rada Mihalcea. 2005. Measuring the semantic similarity of texts. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pages 13–18, Ann Arbor, Michigan. Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of American Society for Information Science, 41(6):391–407. Antonio Di Marco and Roberto Navigli. 2013. Clustering and diversifying Web search results with graph-based Word Sense Induction. Computational Linguistics, 39(3). Tamer Elsayed, Jimmy Lin, and Douglas W. Oard. 2008. Pairwise document similarity in large collections with MapReduce. In Proceedings of ACL-HLT, pages 265– 268, Columbus, Ohio. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA. Hagen F¨urstenau and Mirella Lapata. 2012. Semi-supervised Semantic Role Labeling via structural alignment. Computational Linguistics, 38(1):135–171. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. In Proceedings of IJCAI, pages 1606– 1611, Hyderabad, India. 
Oren Glickman and Ido Dagan. 2003. Acquiring lexical paraphrases from a single corpus. In Proceedings of RANLP, pages 81–90, Borovets, Bulgaria. Dan Gusfield. 1997. Algorithms on strings, trees, and sequences: computer science and computational biology. Cambridge University Press. Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of KDD, pages 1406– 1414, Beijing, China. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10–18. Taher H. Haveliwala. 2002. Topic-sensitive PageRank. In Proceedings of WWW, pages 517–526, Hawaii, USA. Angelos Hliaoutakis, Giannis Varelas, Epimenidis Voutsakis, Euripides GM Petrakis, and Evangelos Milios. 2006. Information retrieval by semantic similarity. International Journal on Semantic Web and Information Systems, 2(3):55–73. Thomas Hofmann. 2001. Unsupervised Learning by Probabilistic Latent Semantic Analysis. Machine Learning, 42(1):177–196. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of NAACL, pages 57–60, NY, USA. Thad Hughes and Daniel Ramage. 2007. Lexical semantic relatedness with random graph walks. In Proceedings of EMNLP-CoNLL, pages 581–589, Prague, Czech Republic. Aminul Islam and Diana Inkpen. 2008. Semantic text similarity using corpus-based word similarity and string similarity. ACM Transactions on Knowledge Discovery from Data, 2(2):10:1–10:25. Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of ROCLING X, pages 19–30, Taiwan. 1350 Adam Kilgarriff. 2001. English lexical sample task description. In Proceedings of Senseval, pages 17–20, Toulouse, France. Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review; Psychological Review, 104(2):211. Dekang Lin. 1998a. Automatic retrieval and clustering of similar words. In Proceedings of COLING, pages 768– 774, Montreal, Quebec, Canada. Dekang Lin. 1998b. An information-theoretic definition of similarity. In Proceedings of ICML, pages 296–304, San Francisco, CA. Bernardo Magnini and Gabriela Cavagli`a. 2000. Integrating subject field codes into WordNet. In Proceedings of LREC, pages 1413–1418, Athens, Greece. Ana G. Maguitman, Filippo Menczer, Heather Roinestad, and Alessandro Vespignani. 2005. Algorithmic detection of semantic similarity. In Proceedings of WWW, pages 107– 116, Chiba, Japan. Irina Matveeva, Gina-Anne Levow, Ayman Farahat, and Christiaan Royer. 2005. Terms representation with generalized latent semantic analysis. In Proceedings of RANLP, Borovets, Bulgaria. Diana McCarthy and Roberto Navigli. 2009. The English lexical substitution task. Language Resources and Evaluation, 43(2):139–159. Diana McCarthy. 2006. Relating WordNet senses for word sense disambiguation. In Proceedings of the Workshop on Making Sense of Sense at EACL-06, pages 17–24, Trento, Italy. Rada Mihalcea and Dan Moldovan. 2001. Automatic generation of a coarse grained WordNet. In Proceedings of NAACL Workshop on WordNet and Other Lexical Resources, Pittsburgh, USA. Vivi Nastase and Michael Strube. 2013. Transforming Wikipedia into a large scale multilingual concept network. 
Artificial Intelligence, 194:62–85. Roberto Navigli and Simone Paolo Ponzetto. 2012a. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217–250. Roberto Navigli and Simone Paolo Ponzetto. 2012b. BabelRelate! a joint multilingual approach to computing semantic relatedness. In Proceedings of AAAI, pages 108–114, Toronto, Canada. Roberto Navigli. 2006. Meaningful clustering of senses helps boost Word Sense Disambiguation performance. In Proceedings of COLING-ACL, pages 105–112, Sydney, Australia. Roberto Navigli. 2009. Word Sense Disambiguation: A survey. ACM Computing Surveys, 41(2):1–69. Martha Palmer, Hoa Dang, and Christiane Fellbaum. 2007. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering, 13(2):137–163. Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. WordNet::Similarity - measuring the relatedness of concepts. In Proceedings of AAAI, pages 144–152, San Jose, CA. Mohammad Taher Pilehvar and Roberto Navigli. 2013. Paving the way to a large-scale pseudosense-annotated dataset. In Proceedings of NAACL-HLT, pages 1100– 1109, Atlanta, USA. Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of WWW, pages 337–346, Hyderabad, India. Daniel Ramage, Anna N. Rafferty, and Christopher D. Manning. 2009. Random walks for text semantic similarity. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing, pages 23–31, Suntec, Singapore. Reinhard Rapp. 2003. Word sense discovery based on sense descriptor dissimilarity. In Proceedings of the Ninth Machine Translation Summit, pages 315–322, New Orleans, LA. Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of IJCAI, pages 448–453, Montreal, Canada. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Gerard Salton, A. Wong, and C. S. Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620. Rion Snow, Sushant Prakash, Daniel Jurafsky, and Andrew Y. Ng. 2007. Learning to merge word senses. In EMNLPCoNLL, pages 1005–1014, Prague, Czech Republic. Mark Steyvers and Tom Griffiths. 2007. Probabilistic topic models. Handbook of Latent Semantic Analysis, 427(7):424–440. Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2011. Learning to rank answers to non-factoid questions from Web collections. Computational Linguistics, 37(2):351–383. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Peter D. Turney, Michael L. Littman, Jeffrey Bigham, and Victor Shnayder. 2003. Combining independent modules to solve multiple-choice synonym and analogy problems. In Proceedings of RANLP, pages 482–489, Borovets, Bulgaria. Frane ˇSari´c, Goran Glavaˇs, Mladen Karan, Jan ˇSnajder, and Bojana Dalbelo Baˇsi´c. 2012. Takelab: Systems for measuring semantic text similarity. In Proceedings of SemEval-2012, pages 441–448, Montreal, Canada. Michael J. Wise. 1993. String similarity via greedy string tiling and running Karp-Rabin matching. In Department of Computer Science Technical Report, Sydney. David Yarowsky. 1995. 
Unsupervised Word Sense Disambiguation rivaling supervised methods. In Proceedings of ACL, pages 189–196, Cambridge, Massachusetts. Torsten Zesch, Christof M¨uller, and Iryna Gurevych. 2008. Using Wiktionary for computing semantic relatedness. In Proceedings of AAAI, pages 861–866, Chicago, Illinois. 1351
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1352–1362, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Linking and Extending an Open Multilingual Wordnet Francis Bond Linguistics and Multilingual Studies Nanyang Technological University [email protected] Ryan Foster Great Achievement Press [email protected] Abstract We create an open multilingual wordnet with large wordnets for over 26 languages and smaller ones for 57 languages. It is made by combining wordnets with open licences, data from Wiktionary and the Unicode Common Locale Data Repository. Overall there are over 2 million senses for over 100 thousand concepts, linking over 1.4 million words in hundreds of languages. 1 Introduction We wish to create a lexicon covering as many languages as possible, with as much useful information as possible. Generally, language resources, to be useful, must be both accessible (legal to use) and usable (of sufficient quality, size and with a documented interface) (Ishida, 2006). We address both of these concerns in this paper. One of the many attractions of the semantic network WordNet (Fellbaum, 1998), is that there are numerous wordnets being built for different languages. There are, in addition, many projects for groups of languages: Euro WordNet (Vossen, 1998), BalkaNet (Tufis¸ et al., 2004), Asian Wordnet (Charoenporn et al., 2008) and more. Although there are over 60 languages for which wordnets exist in some state of development (Fellbaum and Vossen, 2012, 316), less than half of these have released any data, and for those that have, the data is often not freely accessible (Bond and Paik, 2012). For those wordnets that are available, they are of widely varying size and quality, both in terms of accuracy and richness. Further, there is very little standardization in terms of format, what information is included, or license. The goal of the research outlined in this paper is to make it possible for a researcher interested in working on the lexical semantics of a language or languages to be able to access wordnets for those languages with a minimum of legal and technical barriers. In practice this means making it possible to access multiple wordnets with a common interface. We also use sources of semi-structured data that have minimal legal restrictions to automatically extend existing freely available wordnets and to create additional wordnets which can be added to our open wordnet grid. Previous studies have leveraged multiple wordnets and Wiktionary (Wikimedia, 2013) to extend existing wordnets or create new ones (de Melo and Weikum, 2009; Hanoka and Sagot, 2012). These studies passed over the valuable sense groupings of translations within Wiktionary and merely used Wiktionary as a source of translations that were not disambiguated according to sense. The present study built and extended wordnets by directly linking Wiktionary senses to WordNet senses. Meyer and Gurevych (2011) demonstrated the ability to automatically identify many matching senses in Wiktionary and WordNet based on the similarity of monolingual features. Our study combines monolingual features with the disambiguating power of multiple languages. In addition to differences in linking methodology, our project gives special attention to ensuring the maximum re-usability and accessibility of the data and software released. 
Other large scale multilingual lexicons have been made by linking wordnet to Wikipedia (Wikipedia, 2013; de Melo and Weikum, 2010; Navigli and Ponzetto, 2012). Our approach is complementary to these: in general Wikipedia has more entities than classes, while Wiktionary has more classes. In Section 2 we discuss linking freely available wordnets to form a single multilingual semantic network. In Section 3 we extend the wordnets with data from two sources. We show the results in Section 4 and then discuss them and outline future 1352 work in Section 5. 2 Linking Multiple Wordnets In order to make the data from existing wordnet projects more accessible, we have built a simple database with information from those wordnets with licenses that allow redistribution of the data. These wordnets, their licenses and recent activity are summarized in Table 1 (sizes for most of them are shown in Table 2).1 Wordnet Project Lng Licence Type Albaneto als CC BY a Arabic WordNet arb CC BY-SA s DanNet dan wordnet a Princeton WordNetu eng wordnet a Persian Wordnet fas free to use u FinnWordNetu fin CC BY a WOLFu fra CeCILL-C s Hebrew Wordneto heb wordnet s MultiWordNeto ita CC BY a Japanese Wordnetu jpn wordnet a Multilingual cat CC BY a Central eus CC BY-NC-SA n Repositoryo,u glg CC BY a spa CC BY a Wordnet Bahasau ind MIT a zsm MIT a Norwegian Wordneto nno wordnet a nob wordnet a plWordNeto,u pol wordnet a OpenWN-PTu por CC BY-SA s Thai Wordnet tha wordnet a o Re-released under an open license in 2012 u Updated in 2012 Type: u Unrestricted; a Attribution; s Share-alike; n Non-commercial URL: http://casta-net.jp/~kuribayashi/multi/ Table 1: Linked Open Wordnets The first wordnet developed is the Princeton WordNet (PWN: Fellbaum, 1998). It is a large lexical database of English. Open class words (nouns, verbs, adjectives and adverbs) are grouped into concepts represented by sets of synonyms (synsets). Synsets are linked by semantic relations such as hyponomy and meronomy. PWN is released under an open license (allowing one to use, copy, modify and distribute it so long as you properly acknowledge the copyright). The majority of freely available wordnets take the basic structure of the PWN and add new lemmas (words) to the existing synsets: the extend model (Vossen, 2005). For example, dogn:1 is linked to the lemmas chien in French, anjing in Malay, and so on. It is widely realized that this 1We have now added Mandarin Chinese. model is imperfect as different languages lexicalize different concepts and link them in different ways (Fellbaum and Vossen, 2012). Nevertheless, many projects have found that the overall structure of PWN serves as a useful scaffold. The fact that, for example, a dogn:1 is an animaln:1 is language independent. In theory, such wordnets can easily be combined into a single resource by using the PWN synsets as pivots. All languages are linked through the English wordnet. Because they are linked at the synset level, the problem of ambiguity one gets when linking bilingual dictionaries through a common language is resolved: we are linking senses to senses. In practice, linking a new language’s wordnet into the grid could be problematic for three reasons. The first problem was that the wordnets were linked to various versions of the Princeton WordNet. In order to combine them into a single multilingual structure, we had to map to a common version. The second problem was the incredible variety of formats that the wordnets are distributed in. Almost every project uses a different format. 
Even different versions of the same project often had slightly different formats. The final problem was legal: not all wordnets have been released under licenses that allow reuse. The first problem can largely be overcome using the mapping scripts from Daude et al. (2003). Mapping introduces some distortions, in particular, when a synset is split, we chose to only map the translations to the most probable mapping, so some new synsets will have no translations. The second problem we are currently solving through brute force, writing a new script for every new project we add. We make these scripts, along with the reformatted wordnets, freely available for download. Any problems or bugs found when converting the wordnets have been reported back to the original projects, with many of them fixed in newer releases. We consider this feedback to be an important part of our work: it means that other researchers and users do not have to suffer from the same problems and it encourages projects to release updates. The third, legal, problem is being solved by an ongoing campaign to encourage projects to (re-)release their data under open licenses. Since Bond and Paik (2012) surveyed wordnet licenses in 2011, six projects have newly released data un1353 der open licenses and eight projects have updated their data. Our combined wordnet includes English (Fellbaum, 1998); Albanian (Ruci, 2008); Arabic (Black et al., 2006); Chinese (Huang et al., 2010); Danish (Pedersen et al., 2009); Finnish (Lind´en and Carlson., 2010); French (Sagot and Fiˇser, 2008); Hebrew (Ordan and Wintner, 2007); Indonesian and Malaysian (Nurril Hirfana et al., 2011); Italian (Pianta et al., 2002); Japanese (Isahara et al., 2008); Norwegian (Bokm˚al and Nynorsk: Lars Nygaard 2012, p.c.); Persian (Montazery and Faili, 2010); Portuguese (de Paiva and Rademaker, 2012); Polish (Piasecki et al., 2009); Thai (Thoongsup et al., 2009) and Basque, Catalan, Galician and Spanish from the Multilingual Common Repository (Gonzalez-Agirre et al., 2012). On our server, the wordnets are all in a shared sqlite database using the schema produced by the Japanese WordNet project (Isahara et al., 2008). The database is based on the logical structure of the Princeton WordNet, with an additional language attribute for lemmas, examples, definitions and senses. It is a single open multilingual resource. When we redistribute the data, each project’s data is made available separately, with a common format, but separate licenses. The Scandinavian and Polish wordnets are based on the merge approach, where independent language specific structures are built and then some synsets linked to PWN. Typically only a small subset will be linked (due more to resource limitations than semantic incompatibility). 2.1 Core Concepts Boyd-Graber et al. (2006) created a list of 5,000 core word senses in Princeton WordNet which represent approximately the 5,000 most frequently used word senses.2 We use this list to evaluate the coverage of the wordnets: do they contain words for the most common concepts? As a very rough measure of useful coverage, we report the percentage of synsets covered from this core list. Because the list is based on English data, it is of course not a perfect measure for other languages and cultures. Note that some wordnet projects have deliberately targeted the core concepts, which of course boosts their coverage scores. 2The original list is here from http://wordnetcode. princeton.edu/standoff-files/core-wordnet.txt; we converted it to wn30 synsets. 
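To make the coverage measure concrete, the percentage of core synsets covered by one language could be computed from the shared sqlite database along the following lines; the sense table and its columns are assumptions loosely modelled on the Japanese WordNet schema mentioned above, not the project's actual code.

```python
import sqlite3

def core_coverage(db_path, lang, core_synsets):
    """Rough sketch: percentage of the ~5,000 core synsets that have at least
    one lemma in `lang`. Assumes a sense(synset, wordid, lang) table roughly
    following the Japanese WordNet schema; the real schema may differ."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT DISTINCT synset FROM sense WHERE lang = ?", (lang,))
    covered_synsets = {synset for (synset,) in rows}
    conn.close()
    covered = len(covered_synsets & set(core_synsets))
    return 100.0 * covered / len(core_synsets)
```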
2.2 License Types The licenses fall into four broad categories: (u) completely unrestricted, (a) attribution required, (s) share alike, and (n) non-commercial. The first category includes any work that is in the public domain or that the author has released without any restrictions. The second category allows anyone to use, adapt, improve, and redistribute the work as long as one attributes the work in the manner specified by the copyright holder (without suggesting an endorsement). The WordNet, MIT, and CC BY licenses are all in this category. The third category allows anyone to adapt and improve the licensed work and redistribute it, but the redistributed work must be released under the same license. The CC BY-SA, GPL, GFDL, and CeCILL-C licenses are of this type. Because derivative works can only be redistributed under the same license, works licensed under any two of these licenses cannot be combined with each other and legally redistributed. In general, a work formed from the combination of works in category (u) and (a) with a work in category (s) will be subject to the more restrictive terms of the the share alike license. However, the GPL, GFDL and CeCILL-C are incompatible with CC BY.3 The fourth type of license further forbids the commercial use of a work. The CC BY-NC and the CC BY-NC-SA licenses are in this category, they are also incompatible with licenses in category (s). Releasing a work under the more restrictive licenses in categories (s) and (n) above substantially limit and complicate the ability to extend and combine a work into other useful forms. By maintaining a separation of databases released under incompatible licenses, we avoid any possible legal problems. Due to license incompatibilities, it is impossible to release a single database with all the wordnets, even though individually they are redistributable. We can currently combine those with licenses in groups (u) and (a) and the CC BYSA wordnets (now everything except French and Basque). 3 Extending with non-wordnet data We looked at two sources for automatically adding new entries. The Unicode Common Locale Data Repository (CLDR) has reliable information on languages, territories and dates. Wiktionary is a 3http://www.gnu.org/licenses/license-list. html\#ccby 1354 general purpose lexicon with much more information for many words. 3.1 Unicode Common Locale Data Repository (CLDR) We added information on languages, territories and dates from the Unicode Common Locale Data Repository (CLDR).4 This is a collection of data maintained by the Unicode Consortium to support software internationalization and localization with locale information on formatting dates, numbers, currencies, times, and time zones, as well as help for choosing languages and countries by name. It has this data for over 194 languages. It is released under an open license that allows redistribution with proper attribution (Unicode, Inc., 2012).5 We found data for 122 languages. Most had around 550 senses (synsets and their lemmas): for example, for Portuguese: Englishn:1 inglˆes. Some had only 40 or 50, such as Assamese, which only has the week days, month names and a few language names. The linked data was small enough to check by hand. When the original CLDR data is correct the data we generate should be correct. The idea of using such data is not new. Quah et al. (2001) for example, use Linux locale data to extend a proprietary English-Malay lexicon. 
de Melo and Weikum (2009) also use this data (and data from a variety of other sources) to build an enhanced wordnet, in addition adding new synsets for concepts that are not in wordnet. However, when they released the data as LEXVO (data about languages: CC BY-SA) and UWN (the universal multilingual wordnet: CC BY-NC-SA), they added additional license restrictions which complicate the reuse of the data and make it impossible to integrate the data back into the original wordnet projects.
3.2 Wiktionary
Searches for a recent, publicly available version of Wiktionary in a preprocessed, machine-readable format did not turn up any suitable sources.6 Although there are several freely available software programs that are capable of parsing portions of the English Wiktionary, none of the programs that were evaluated appeared to extract the precise set of information desired for our task in an easy-to-use format. We therefore decided to build a custom parser capable of extracting the information needed for building open wordnets.
4 http://cldr.unicode.org/
5 With the extra requirement that "there is clear notice in each modified Data File or in the Software as well as in the documentation associated with the Data File(s) or Software that the data or software has been modified."
6 We later learned that McCrae et al. (2012) made a release of Wiktionary in the lemon format (http://datahub.io/en/dataset/dbnary). They did not, however, release the code they used to parse Wiktionary.
3.2.1 Wiktionary Parser
Since each language edition of Wiktionary is formatted in a somewhat unique way, parsers must be tailored to recognize the structure and formatting of each edition on a case-by-case basis. We created a parser tailored to the English Wiktionary, although it can be extended to handle other language versions as well. We are releasing this code under the MIT license.7 The current version of the parser is capable of extracting headwords, parts of speech, definitions, synonyms and translations from the XML Wiktionary database dumps provided by the Wikimedia Foundation.8 Within these large XML files, the main body of each Wiktionary article is stored in Wikitext, a semi-structured format. Although anyone can edit a Wiktionary page and use any style of formatting they desire, the community of users encourages adhering to established guidelines, which produces a format that is generally predictable. Within the English Wiktionary, synonyms and translations are both grouped into sense groups that correspond with definitions in the main section. These sense groups are marked by a short text gloss (short gloss), which is usually an abbreviated version of one of the full definitions (full definition). The parser makes no attempt to match these short glosses with the full definitions. Data is simply extracted, cleaned, and then stored in a relational database or flat file. Translations proved to be easy to extract due to the fairly consistent use of a specifically formatted translation template. These templates include a language code derived from ISO standards, the translation, and optional additional information such as gender, transliteration, script, and alternate forms. The parser extracts and retains all of this potentially valuable information.
7 Available from the Open Multilingual Wordnet Page: http://casta-net.jp/~kuribayashi/multi/.
8 http://dumps.wikimedia.org/
Examples of translation templates:
• Finnish: {{t+|fi|sanakirja}}
• French: {{t+|fr|dictionnaire|m}}
To enable later processing, it is necessary to tie synonyms and translations to their corresponding short gloss via a unique key. Most parsers simply use an automatically generated surrogate key or a key based on the ordered position of data within a Wiktionary article. Since Wiktionary is constantly changing, the side effect of this approach is that data extracted from a specific snapshot of the Wiktionary database can only be meaningfully used in connection with other data extracted by the same parser from the exact same snapshot. To overcome this, we use a unique key that can be recreated from the data itself, which we call the defkey. To generate this key, we concatenate the language code, headword, part of speech, and the short gloss, and use the sha1 hash function (NIST, 2012) to create a unique 40-character hexadecimal string from the resulting text (both the template parsing and the defkey are sketched in code below). These defkeys are time- and technology-independent, so they allow researchers to efficiently share and compare results. Once a link is established between a defkey and a particular synset, translations added to Wiktionary at a later date can be automatically integrated into our multilingual wordnet. Conversely, if a Wiktionary contributor changes a short gloss, historical data connected to the old defkey is preserved, while new data imported at a later time will not be incorrectly linked to an older definition. Another feature of our parser is a feedback mode, which generates a report about poorly formatted data that was encountered. These automatically generated reports can be used to create a quality-enhancing feedback loop with Wiktionary.
3.2.2 Linking Senses
Meyer and Gurevych (2011) showed that automatic alignments between Wiktionary senses and PWN can be established with reasonable accuracy and recall by combining multiple text similarity scores that compare a bag of words based on several pieces of information linked to a WordNet sense with another bag of words obtained from a Wiktionary entry. In our study we evaluated the potential for aligning senses based on common translations in combination with monolingual similarity features. We used 20 of the wordnets described in Section 2,9 and the Wiktionary data obtained using the parser described in Section 3.2.1. Before searching for translation matches, we normalized the data to ensure the most accurate possible overlap count. First, article headwords were included as English translations of Wiktionary senses (along with synonyms). Then differences in language codes were rectified, and translations containing symbolic characters or a mixture of roman and non-roman characters were marked to be ignored, save a few exceptions. This left approximately 1.4 million sense translations in 20 languages in our wordnet grid, and nearly 1.3 million Wiktionary translations in over 1,000 languages. We then created a list of all possible alignments where at least one translation of a wordnet sense matched a translation of a Wiktionary sense. This represented a small percentage of the possible alignments, because definitions in Wiktionary that do not contain any translations were ignored in our study. Of more than 500,000 English definitions in Wiktionary, only about 130,000 presently have associated translations. The resulting graph contained over 700,000 possible sense alignments.
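Returning to the parser of Section 3.2.1, the two mechanisms just described — translation templates and the defkey — can be sketched as follows. The regular expression and the field separator used for hashing are simplifications; the released parser may normalise case, whitespace and extra template arguments differently.

```python
import hashlib
import re

# Matches translation templates such as {{t+|fi|sanakirja}} or
# {{t|fr|dictionnaire|m}}; extra arguments (gender, transliteration, script,
# alternate forms, ...) are kept as an unparsed tail.
TRANSLATION = re.compile(
    r"\{\{t\+?\|(?P<lang>[^|}]+)\|(?P<term>[^|}]+)(?P<rest>[^}]*)\}\}")

def parse_translations(wikitext):
    """Yield (language_code, translation, extra) triples from Wikitext."""
    for m in TRANSLATION.finditer(wikitext):
        yield m.group("lang"), m.group("term"), m.group("rest").strip("|")

def defkey(lang_code, headword, pos, short_gloss):
    """Reproducible 40-character key tying synonyms/translations to a short
    gloss: sha1 over the concatenated fields, as described above."""
    text = "|".join([lang_code, headword, pos, short_gloss])
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

line = "* Finnish: {{t+|fi|sanakirja}}  * French: {{t+|fr|dictionnaire|m}}"
print(list(parse_translations(line)))
print(defkey("en", "dictionary", "Noun",
             "A reference work with a list of words"))
```

Because the key is derived from the data itself rather than from article positions, two snapshots taken years apart produce the same defkey for an unchanged sense, which is what makes the link to a synset stable over time.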
We calculated a number of similarity scores, the first two based on similarity in the number of lemmas, calculated using the Jaccard index:

sim_e(s_n, s_k) = |E(s_k) ∩ E(s_n)| / |E(s_k) ∪ E(s_n)|   (1)
sim_a(s_n, s_k) = |L(s_k) ∩ L(s_n)| / |L(s_k) ∪ L(s_n)|   (2)

where s_k, s_n are concepts in Wiktionary and wordnet respectively,10 E(s) is the set of English lemmas for sense s and L(s) is the set of lemmas in all languages. As an initial pruning, we kept only matches where either: sim_a ≥ 0.7, or (sim_e ≥ 0.5 and sim_a ≥ 0.5), or, if |L(s_k) ∩ L(s_n)| > 5, then (sim_e ≥ 0.5 and sim_a ≥ 0.45). After applying these filters, approximately 220,000 alignment candidates remained. We reviewed a random sample of 551 alignment candidates. Of these, 136 were deemed correctly aligned. Another 48 we considered possibly close enough to produce valid translations for wordnet. All others were marked as incorrect alignments. This development dataset was used to tune refined similarity scores:

sim_t(s_n, s_k) = |L(s_k) ∩ L(s_n)| / sqrt(α |L(s_k) ∪ L(s_n)|)   (3)
sim_d(s_n, s_k) = (BoW(wndef) · BoW(wkdef)) / (∥BoW(wndef)∥ ∥BoW(wkdef)∥)   (4)
sim_c(s_n, s_k) = sim_t + β sim_d   (5)

sim_t gives higher weight to concepts that link through more lemmas, not just a higher proportion of lemmas. sim_d measures the similarity of the definitions in the two resources, using a cosine similarity score. We initially used the WordNet gloss and example sentence(s) for wndef and the short gloss from Wiktionary for wkdef. This improved the accuracy of the combined ranking score (sim_c), but since many of the short glosses are only one or two words, the sparse input often produced a sim_d score of zero even when the candidate alignment was correct. To improve the accuracy of the sim_d component, we also added in the long definitions. Short glosses were aligned with long definitions using a similar approach to McCrae et al. (2012). First we searched for a match where the short gloss was a substring of the full definition. If that failed to produce a single possible alignment, we aligned the short gloss with the full definition that produced the greatest cosine similarity score. Finally, where the short definition was blank and only a long definition was present, we aligned the two. The results of this alignment were less than 90% accurate, so to offset the effects of this noise we included both the full definition and the short gloss in wkdef. For wndef we used the WordNet gloss, example sentence(s), and synonyms. Even though the linking of definitions within Wiktionary left much to be desired, the increased amount of text improved the accuracy of the definition-based similarity component of our ranking score. Our combined ranking score (sim_c), based on both overlapping translations and a monolingual lexical similarity score, was able to outperform ranking based on either component in isolation. We expect that an improved alignment of short glosses to full definitions, together with more accurate measures of lexical similarity such as those described by Meyer and Gurevych (2011), would further improve the accuracy of a combined ranking score. We employed our combined ranking score first as a filter, where sim_c ≥ τ_c. The ranking score is then used to select the best match among competing alignments. Alignments are based on the belief that a definition within Wiktionary should only map to a single WordNet synset (if any at all).
9 We didn't use Chinese or Polish, as the wordnets were added after we had started the evaluation.
10 Precisely, synsets in wordnet and senses in Wiktionary.
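The scores in equations (1)–(5), together with the initial pruning filter, are simple set and vector operations. The sketch below restates them in code with plain Python sets and a bag-of-words cosine; the whitespace tokenisation in sim_d is a simplification of the actual preprocessing.

```python
import math
from collections import Counter

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def sim_e(wn_eng, wk_eng):                 # eq. (1): English lemmas only
    return jaccard(wn_eng, wk_eng)

def sim_a(wn_all, wk_all):                 # eq. (2): lemmas in all languages
    return jaccard(wn_all, wk_all)

def sim_t(wn_all, wk_all, alpha=3.2):      # eq. (3): rewards many shared lemmas
    union = len(wn_all | wk_all)
    return len(wn_all & wk_all) / math.sqrt(alpha * union) if union else 0.0

def sim_d(wn_def, wk_def):                 # eq. (4): cosine over bags of words
    a, b = Counter(wn_def.lower().split()), Counter(wk_def.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def sim_c(st, sd, beta=0.7):               # eq. (5): combined ranking score
    return st + beta * sd

def passes_pruning(se, sa, shared_lemmas):  # the initial filter described above
    return (sa >= 0.7
            or (se >= 0.5 and sa >= 0.5)
            or (shared_lemmas > 5 and se >= 0.5 and sa >= 0.45))
```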
In theory, each WordNet synset should represent a meaning distinguishable from all other synsets. Because Wiktionary is organized according to lemma first and sense second, multiple definitions in separate articles often map to the same synset. For example, mortal "A human; someone susceptible to death", individual "A person considered alone ...", and person "A single human being; an individual" all align with someonen:1 (00007846-n). However, two distinct definitions within the same Wiktionary entry should not map to the same WordNet sense. When there are multiple possible alignments where only one can be valid, sim_c is used to determine the best match. In addition to using the combined ranking score as a filter, we found that we could obtain a small additional increase in accuracy without reducing recall by also requiring sim_t ≥ τ_t or sim_d ≥ τ_d. To determine ideal values for the weights and thresholds, we performed several grid searches. The parameters are interdependent and can produce reasonable results at a variety of points. Ideal values also depend on whether we wish to maximize accuracy or recall. α is set at 3.2 in order to achieve an ideal target threshold of τ_t = 1. We finally chose values of β = 0.7 and τ_c = 0.71, which gave a reasonable balance between accuracy and recall.
4 Results and Evaluation
We give the data for the 26 wordnets with more than 10,000 synsets in Table 2. There are a further 57 with more than 1,000 synsets; 133 with more than 100; 200 with more than 10; and 645 with more than 1 (although most of the very small languages appear to be simple errors in the language code entered into Wiktionary). Individual totals are shown for synsets and senses from the original wordnets, the data extracted from Wiktionary, and the merged data of the wordnets, Wiktionary and CLDR. We do not show the CLDR data in the table as it is so small, generally 500–600 synsets for the top languages. Overall there are 2,040,805 senses for 117,659 concepts, using over 1,400,000 words in over 1,000 languages.
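Before turning to the results, the final selection procedure just described can be summarised in a short sketch: keep a candidate only if it passes the combined-score filter and one of the two additional thresholds, and let each Wiktionary definition map to at most one synset, chosen by the highest ranking score. τ_c, τ_t and the score definitions come from the text above; the value of τ_d is not reported explicitly, so the default used here is an assumption.

```python
def select_alignments(candidates, tau_c=0.71, tau_t=1.0, tau_d=0.1):
    """candidates: iterable of (wiktionary_sense, wordnet_synset, st, sd, sc)
    tuples, where st, sd, sc are sim_t, sim_d and sim_c for the pair.
    Returns a dict mapping each Wiktionary sense to its single best synset."""
    best = {}
    for wk_sense, synset, st, sd, sc in candidates:
        if sc < tau_c or not (st >= tau_t or sd >= tau_d):
            continue                      # fails the filtering step
        if wk_sense not in best or sc > best[wk_sense][1]:
            best[wk_sense] = (synset, sc)  # keep the highest-ranked competitor
    return {wk: syn for wk, (syn, _) in best.items()}
```

Note that the constraint runs one way only: several Wiktionary definitions (from different articles) may still map to the same synset, as in the mortal/individual/person example above.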
Table 2: Merged Wordnets (with more than 10,000 entries). Core shows the percentage coverage of the 5,000 core concepts.
                              Projects                     Wiktionary                  Merged (+CLDR)
ISO  Language                 Synsets   Senses    Core     Synsets  Senses   Core      Synsets   Senses    Core
eng  English                  117,659   206,978   100      35,400   49,951   75        117,661   213,538   100
fin  Finnish                  116,763   189,227   100      21,516   31,154   65        116,830   199,435   100
tha  Thai                     73,350    95,517    81       2,560    3,193    17        73,595    97,390    81
fra  French                   59,091    102,671   92       20,449   27,150   63        61,258    109,643   95
jpn  Japanese                 57,179    158,064   95       12,685   19,479   52        59,112    166,617   96
ind  Indonesian               52,006    142,488   99       2,390    2,810    17        52,154    143,755   99
cat  Catalan                  45,826    70,622    81       8,626    10,251   36        48,007    74,806    84
spa  Spanish                  38,512    57,764    76       18,281   25,310   60        47,737    74,848    86
por  Portuguese               41,810    68,285    79       12,331   16,178   53        43,870    74,151    84
zsm  Standard Malay           42,766    119,152   99       2,833    3,744    19        43,079    120,686   99
ita  Italian                  34,728    60,561    83       14,605   18,710   53        38,938    68,827    87
eus  Basque                   29,413    48,934    71       1,693    1,943    11        29,965    49,945    72
pol  Polish                   14,008    21,001    30       10,888   13,431   46        20,975    30,943    55
glg  Galician                 19,312    27,138    36       2,492    2,871    15        20,772    29,136    42
fas  Persian                  17,759    30,461    41       4,229    5,443    26        20,766    35,318    55
rus  Russian                  0         0         0        19,983   33,716   64        20,138    34,009    64
deu  German                   0         0         0        19,675   29,616   64        19,857    29,884    64
cmn  Mandarin Chinese         4,913     8,069     28       12,130   19,079   49        15,490    27,113    60
arb  Standard Arabic          10,165    21,751    48       6,892    9,337    38        14,861    31,337    63
nld  Dutch                    0         0         0        13,741   19,709   56        13,950    20,003    56
ces  Czech                    0         0         0        12,802   15,493   54        13,030    15,813    54
swe  Swedish                  0         0         0        12,000   16,226   51        12,221    16,512    51
ell  Modern Greek             0         0         0        10,308   13,071   44        10,549    13,472    44
dan  Danish                   4,476     5,859     81       7,290    8,931    35        10,328    13,551    85
nob  Norwegian Bokmål         4,455     5,586     79       7,262    9,170    35        10,322    13,612    83
hun  Hungarian                0         0         0        9,964    12,699   45        10,213    13,029    45
The smaller wordnets are not of much practical use, but can still serve as the core of new projects. For the bigger wordnets, the data from Wiktionary (and, to a lesser extent, CLDR) yields only a small increase in the number of senses. The biggest change is for the medium-sized projects, such as Persian or Arabic, which end up with much better coverage of the most frequent core concepts. Major languages such as German or Russian, which currently do not have open wordnets, also get good coverage. The size of the mapping table is the same as the number of English senses linked (49,951 senses). We evaluated a random sample of 160 alignments and found the accuracy to be 90% (the Wiktionary sense maps to the best possible wordnet sense). We then evaluated samples of the wordnet created from Wiktionary for several languages. For each language we chose 100 random senses, then checked them against existing wordnets.11 For all unmatched entries, we then had them checked by native speakers. The results are given in Table 3. The sense accuracy is higher than the mapping accuracy: in general, entries with more translations are linked more accurately, thus raising the average precision.
11 For Chinese we use the wordnet from Xu et al. (2008), which is free for research but cannot be redistributed. For German we used Euro WordNet (Vossen, 1998).
Table 3: Precision of Wiktionary-based Wordnets.
Language           % Matched   % Good
Chinese*           46          97
Serbo-Croatian*,** 0           91
Czech*             0           99
English            89          92
German*            19          85
Indonesian         69          97
Korean*            0           96
Japanese           56          90
Russian*           0           99
Average                        94.0
* Not used to build the mapping from wordnet to Wiktionary.
** We allow terms used in either Serbian or Croatian.
During the extraction and evaluation, we noticed several language-specific features: for example, Serbo-Croatian had a mixture of Cyrillic and Latin entries. For languages where one script was clearly dominant, we kept only that, but really these decisions should be made for each language by a native speaker. We make the data available in two ways. The first is a set of downloads. Each language has up to three files: the data from the wordnet project (if it exists), the data from the CLDR and the data from Wiktionary. They are kept separate in order to keep the licenses as free as possible. The second is two on-line searches: one using only the data from the projects, and one with all the data combined. The combination is done by simple union (a minimal sketch of such a merge is given at the end of this section).12 We maintain this separation as we cannot guarantee the quality of the automatically extracted data. Because the raw data is there, it is possible to combine it in other ways. The simple structure is easy to manipulate, and there is code to use this style of data with the popular toolkit NLTK (Bird et al., 2010).
12 http://casta-net.jp/~kuribayashi/multi/
5 Discussion and Future Work
We have created a large open wordnet of high quality (85%–99% measured on senses). Twenty-six languages have more than 10,000 concepts covered, with 42–100% coverage of the most common core concepts. The data is easily downloadable with minimal restrictions. The overall accuracy is estimated at over 94%, as most of the original wordnets are hand-verified (and so should be 100% accurate). The high accuracy is largely thanks to the disambiguating power of the multiple translations, made possible by the many open wordnets we have access to. Because we link senses between wordnet and Wiktionary and then use the translations of the sense, manually validating this mapping will improve the entries in multiple languages simultaneously. As the Wiktionary–wordnet alignment mapping is linked to persistent keys, it will remain useful even as the resources change. Further, it can be used to identify and add missing senses to wordnet: unmapped Wiktionary entries are candidates for new concepts. The Universal Wordnet (UWN: de Melo and Weikum, 2009) brings in data from even more resources, and combines them to make a larger resource, choosing parameters with slightly lower precision (just under 90%). It is further linked to Wikipedia, adding many named entities. We expect that our work is complementary. Because we use a different approach, it would be possible to merge the two if the licenses allowed us to. However, since the CC BY-SA and CC BY-NC-SA licenses are mutually exclusive, the two works cannot be combined and rereleased unless the relevant parties can relicense the works. There is no easy way to improve UWN beyond checking each and every entry, which is expensive. An advantage of our approach, noted above, is that we can validate the sense matches for English and the accuracy percolates down to all the languages. Integrating data from the most recent version of Wiktionary can be done simply and takes a few hours. It is therefore feasible to update the downloadable data regularly. Improvements in either the wordnet projects or Wiktionary (or both) can also result in improved mappings. We further hope to take advantage of ongoing initiatives in the global wordnet grid to add new concepts not in the Princeton WordNet, so that we can expand beyond an English-centered world view.
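The union merge referred to above is essentially set union over per-source files. The sketch below assumes a simple tab-separated layout (synset, entry type, lemma) per line; this is a plausible reading of the distributed format, not its specification, so the column layout should be checked against the README of the actual download before use.

```python
from pathlib import Path

def load_tab_file(path):
    """Read one 'synset<TAB>type<TAB>lemma' file, skipping comments/blanks."""
    entries = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            synset, typ, lemma = line.rstrip("\n").split("\t")[:3]
            entries.add((synset, typ, lemma))
    return entries

def merge(paths):
    """Combine project, CLDR and Wiktionary files for one language by union."""
    combined = set()
    for p in paths:
        combined |= load_tab_file(p)
    return combined

# Hypothetical file names for one language directory:
# merge(Path("fra").glob("wn-data-*.tab"))
```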
By making the data from multiple sources easily available with minimal restrictions, we hope that it will be easier to do research that exploits lexical semantics. In particular, we make the data easily accessible to the original wordnet projects, some of which have already started to merge it into their own resources. We cannot check the accuracy of data in all languages, nor, for example, check that synsets have the most appropriate lemmas associated with them. Many languages have their own orthographic issues (for example a choice of scripts, or the choice to include vowels or not). Our automatic extraction does not deal with these issues at all. This kind of language-specific quality control is best done by the individual wordnet projects. We also consider it important to keep feeding data back to the individual wordnet projects, as much of the innovative research comes from them: the class/instance distinction from PWN; the distinction between rigid and non-rigid synsets from the Kyoto Project; domain mappings from the MultiWordNet (Pianta et al., 2002); representing orthographic variation from the Japanese Wordnet (Kuroda et al., 2011); combining close languages from the Wordnet Bahasa (Nurril Hirfana et al., 2011); and so on. For all of these reasons, we do not consider automatic extraction from, or linking to, Wiktionary a substitute for building language-specific wordnets. Further work that this data should allow us to do includes: automatically producing a list of bad data found in Wiktionary that can be used by Wiktionary editors to correct errors; and finding gaps in wordnet by identifying senses in Wiktionary that have a large number of translations but fail to have any significant alignment with existing wordnet synsets. We currently only link through the English Wiktionary and its translations. It should be possible to expand the multilingual wordnet in the same way using Wiktionaries in other languages, which we would expect to improve coverage. Finally, Wiktionary contains a lot of useful information we are not currently using (information on gender, transliterations, pronunciations, alternative spellings and so forth). We can also think of the aligned definitions as a paraphrase corpus for English. We have devoted more space than is usual for a computational linguistics paper to issues of licensing and sustainability. This is deliberate: we feel papers about lexical resources should be clear about licensing, and that it should be considered early on when creating new resources. There are strong arguments that open data leads to better science (Pederson, 2008), and it has been shown that open resources are cited more (Bond and Paik, 2012). In addition, how to maintain resources over time is a major unsolved problem. We consider it important that our wordnet is not just large and accurate but also maintainable and as accessible as possible.
6 Conclusions
We have created an open multilingual wordnet in over 26 languages. It is made by combining wordnets with open licences, data from the Unicode Common Locale Data Repository and Wiktionary. Overall there are over 2 million senses for 117,659 concepts, using over 1.4 million words in hundreds of languages.
Acknowledgments
We would like to thank the following for their help with the evaluation: Le Tuan Anh, František Kratochvíl, Kyonghee Paik, Zina Pozen, Melanie Siegel, Stefanie Stadler, Bilyana Shuman, Liling Tan and Muhammad Zulhelmy bin Mohd Rosman.
References
Stephen Bird, Ewan Klein, and Edward Loper. 2010.
Nyumon Shizen Gengo Shori [Introduction to Natural Language Processing]. O'Reilly. (Translated by Hagiwara, Nakamura and Mizuno.)
W. Black, S. Elkateb, H. Rodriguez, M. Alkhalifa, P. Vossen, A. Pease, M. Bertran, and C. Fellbaum. 2006. The Arabic wordnet project. In Proceedings of LREC 2006.
Francis Bond and Kyonghee Paik. 2012. A survey of wordnets and their licenses. In Proceedings of the 6th Global WordNet Conference (GWC 2012), pages 64–71. Matsue.
Jordan Boyd-Graber, Christiane Fellbaum, Daniel Osherson, and Robert Schapire. 2006. Adding dense, weighted connections to WordNet. In Proceedings of the Third Global WordNet Meeting. Jeju.
Thatsanee Charoenporn, Virach Sornlertlamvanich, Chumpol Mokarat, and Hitoshi Isahara. 2008. Semi-automatic compilation of Asian WordNet. In 14th Annual Meeting of the Association for Natural Language Processing, pages 1041–1044. Tokyo.
Jordi Daude, Lluis Padro, and German Rigau. 2003. Validation and tuning of Wordnet mapping techniques. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP'03). Borovets, Bulgaria.
Gerard de Melo and Gerhard Weikum. 2009. Towards a universal wordnet by learning from combined evidence. In Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009), pages 513–522. ACM, New York, NY, USA.
Gerard de Melo and Gerhard Weikum. 2010. Towards universal multilingual knowledge bases. In Pushpak Bhattacharyya, Christiane Fellbaum, and Piek Vossen, editors, Principles, Construction, and Applications of Multilingual Wordnets. Proceedings of the 5th Global WordNet Conference (GWC 2010), pages 149–156. Narosa Publishing, New Delhi, India.
Valeria de Paiva and Alexandre Rademaker. 2012. Revisiting a Brazilian wordnet. In Proceedings of the 6th Global WordNet Conference (GWC 2012). Matsue.
Christiane Fellbaum and Piek Vossen. 2012. Challenges for a multilingual wordnet. Language Resources and Evaluation, 46(2):313–326. doi:10.1007/s10579-012-9186-z.
Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.
Aitor Gonzalez-Agirre, Egoitz Laparra, and German Rigau. 2012. Multilingual central repository version 3.0: upgrading a very large lexical knowledge base. In Proceedings of the 6th Global WordNet Conference (GWC 2012). Matsue.
Valérie Hanoka and Benoît Sagot. 2012. Wordnet creation and extension made simple: A multilingual lexicon-based approach using wiki resources. In Proceedings of LREC 2012. Istanbul.
Chu-Ren Huang, Shu-Kai Hsieh, Jia-Fei Hong, Yun-Zhu Chen, I-Li Su, Yong-Xiang Chen, and Sheng-Wei Huang. 2010. Chinese wordnet: Design and implementation of a cross-lingual knowledge processing infrastructure. Journal of Chinese Information Processing, 24(2):14–23. (In Chinese.)
Hitoshi Isahara, Francis Bond, Kiyotaka Uchimoto, Masao Utiyama, and Kyoko Kanzaki. 2008. Development of the Japanese WordNet. In Sixth International Conference on Language Resources and Evaluation (LREC 2008). Marrakech.
Toru Ishida. 2006. Language grid: An infrastructure for intercultural collaboration. In IEEE/IPSJ Symposium on Applications and the Internet (SAINT-06), pages 96–100. URL http://langrid.nict.go.jp/file/langrid20060211.pdf (keynote address).
Kow Kuroda, Takayuki Kuribayashi, Francis Bond, Kyoko Kanzaki, and Hitoshi Isahara. 2011. Orthographic variants and multilingual sense tagging with the Japanese WordNet. In 17th Annual Meeting of the Association for Natural Language Processing, pages A4–1. Toyohashi.
Krister Lindén and Lauri Carlson. 2010. FinnWordNet — wordnet på finska via översättning [FinnWordNet — a wordnet in Finnish via translation]. LexicoNordica — Nordic Journal of Lexicography, 17:119–140. In Swedish with an English abstract.
John McCrae, Philipp Cimiano, and Elena Montiel-Ponsoda. 2012. Integrating wordnet and wiktionary with lemon. In Christian Chiarcos, Sebastian Nordhoff, and Sebastian Hellman, editors, Linked Data in Linguistics. Springer.
Christian M. Meyer and Iryna Gurevych. 2011. What psycholinguists know about chemistry: Aligning wiktionary and wordnet for increased domain coverage. In Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP), pages 883–892.
Nurril Hirfana Mohamed Noor, Suerya Sapuan, and Francis Bond. 2011. Creating the open Wordnet Bahasa. In Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation (PACLIC 25), pages 258–267. Singapore.
Mortaza Montazery and Heshaam Faili. 2010. Automatic Persian wordnet construction. In 23rd International Conference on Computational Linguistics, pages 846–850.
Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217–250.
NIST. 2012. Secure hash standard (SHS). FIPS PUB 180-4, National Institute of Standards and Technology.
Noam Ordan and Shuly Wintner. 2007. Hebrew wordnet: a test case of aligning lexical databases across languages. International Journal of Translation, 19(1):39–58.
B. S. Pedersen, S. Nimb, J. Asmussen, N. Sørensen, L. Trap-Jensen, and H. Lorentzen. 2009. DanNet — the challenge of compiling a wordnet for Danish by reusing a monolingual dictionary. Language Resources and Evaluation.
Ted Pederson. 2008. Empiricism is not a matter of faith. Computational Linguistics, 34(3):465–470.
Emanuele Pianta, Luisa Bentivogli, and Christian Girardi. 2002. MultiWordNet: Developing an aligned multilingual database. In Proceedings of the First International Conference on Global WordNet, pages 293–302. Mysore, India.
Maciej Piasecki, Stan Szpakowicz, and Bartosz Broda. 2009. A Wordnet from the Ground Up. Wroclaw University of Technology Press. URL http://www.plwordnet.pwr.wroc.pl/main/content/files/publications/A_Wordnet_from_the_Ground_Up.pdf (ISBN 978-83-7493-476-3).
Chiew Kin Quah, Francis Bond, and Takefumi Yamazaki. 2001. Design and construction of a machine-tractable Malay-English lexicon. In Asialex 2001 Proceedings, pages 200–205. Seoul.
Ervin Ruci. 2008. On the current state of Albanet and related applications. Technical report, University of Vlora. (http://fjalnet.com/technicalreportalbanet.pdf)
Benoît Sagot and Darja Fišer. 2008. Building a free French wordnet from multilingual resources. In European Language Resources Association (ELRA), editor, Proceedings of the Sixth International Language Resources and Evaluation (LREC'08). Marrakech, Morocco.
Sareewan Thoongsup, Thatsanee Charoenporn, Kergrit Robkop, Tan Sinthurahat, Chumpol Mokarat, Virach Sornlertlamvanich, and Hitoshi Isahara. 2009. Thai wordnet construction. In Proceedings of The 7th Workshop on Asian Language Resources (ALR7), Joint conference of the 47th Annual Meeting of the Association for Computational Linguistics (ACL) and the 4th International Joint Conference on Natural Language Processing (IJCNLP). Suntec, Singapore.
Dan Tufiş, Dan Cristea, and Sofia Stamou. 2004. BalkaNet: Aims, methods, results and perspectives. A general overview.
Romanian Journal of Information Science and Technology, 7(1–2):9–34.
Unicode, Inc. 2012. Unicode, Inc. license agreement — data files and software. http://www.unicode.org/copyright.html.
Piek Vossen, editor. 1998. Euro WordNet. Kluwer.
Piek Vossen. 2005. Building wordnets. http://www.globalwordnet.org/gwa/BuildingWordnets.ppt.
Wikimedia. 2013. List of wiktionaries. http://meta.wikimedia.org/w/index.php?title=Wiktionary&oldid=4729333. (Accessed 2013-02-14.)
Wikipedia. 2013. Wikipedia — the free encyclopedia. URL http://en.wikipedia.org/w/index.php?title=Wikipedia&oldid=552515903. (Online; accessed 30 April 2013.)
Renjie Xu, Zhiqiang Gao, Yuzhong Qu, and Zhisheng Huang. 2008. An integrated approach for automatic construction of bilingual Chinese-English WordNet. In 3rd Asian Semantic Web Conference (ASWC 2008), pages 302–341.
2013
133
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1363–1373, Sofia, Bulgaria, August 4-9 2013. ©2013 Association for Computational Linguistics
FrameNet on the Way to Babel: Creating a Bilingual FrameNet Using Wiktionary as Interlingual Connection
Silvana Hartmann† and Iryna Gurevych†‡
† Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt
‡ Ubiquitous Knowledge Processing Lab (UKP-DIPF), German Institute for Educational Research and Educational Information
www.ukp.tu-darmstadt.de
Abstract
We present a new bilingual FrameNet lexicon for English and German. It is created through a simple, but powerful approach to construct a FrameNet in any language using Wiktionary as an interlingual representation. Our approach is based on a sense alignment of FrameNet and Wiktionary, and subsequent translation disambiguation into the target language. We perform a detailed evaluation of the created resource and a discussion of Wiktionary as an interlingual connection for the cross-language transfer of lexical-semantic resources. The created resource is publicly available at http://www.ukp.tu-darmstadt.de/fnwkde/.
1 Introduction
FrameNet is a valuable resource for natural language processing (NLP): semantic role labeling (SRL) systems based on FrameNet provide semantic analysis for NLP applications, such as question answering (Narayanan and Harabagiu, 2004; Shi and Mihalcea, 2005) and information extraction (Mohit and Narayanan, 2003). However, their wide deployment has been prohibited by the poor coverage and limited availability of a similar resource in many languages. Expert-built lexical-semantic resources are expensive to create. Previous cross-lingual transfer of FrameNet used corpus-based approaches, or resource alignment with multilingual expert-built resources, such as EuroWordNet. The latter indirectly also suffers from the high cost and constrained coverage of expert-built resources. Recently, collaboratively created resources have been investigated for the multilingual extension of resources in NLP, beginning with Wikipedia (Navigli and Ponzetto, 2010). They rely on the so-called "Wisdom of the Crowds", contributions by a large number of volunteers, which results in a continuously updated high-quality resource available in hundreds of languages. Due to the encyclopedic nature of Wikipedia, previous work focused on encyclopedic information for Wikipedia entries, i.e., almost exclusively on nouns. This is not enough for resources like FrameNet. Such resources need lexical-semantic information on various POS. For FrameNet, information on the predicates associated with a semantic frame – mostly verbs, nouns, and adjectives – is crucial, for instance glosses or syntactic subcategorization. A solution for the problem of multilingual extension of lexical-semantic resources is to use Wiktionary, a collaboratively created dictionary, as a connection between languages. It provides high-quality lexical information on all POS, for instance glosses, sense relations, syntactic subcategorization, etc. Like Wikipedia, it is continuously extended and contains translations to hundreds of languages, including low-resource ones. To our knowledge, Wiktionary has not been evaluated as an interlingual index for the cross-lingual extension of lexical-semantic resources. In this paper, we present a novel method for the creation of bilingual FrameNet lexicons based on an alignment to Wiktionary.
We demonstrate our method on the language pair English-German and present the resulting resources, a lemma-based multilingual and a sense-disambiguated German-English FrameNet lexicon. The understanding of lexical-semantic resources and their combinations, e.g., how alignment algorithms can be adapted to individual resource pairs and different POS, is essential for their effective use in NLP and a prerequisite for later in-task evaluation and application. To enhance this understanding for the presented resource pair, we perform a detailed analysis of the created resource and compare it to existing FrameNet resources for German.
The contributions of our work are the following: (1) We create a novel sense alignment between FrameNet and the English Wiktionary. It results in a multilingual FrameNet FNWKxx, which links FrameNet senses to lemmas in 280 languages. (2) We create a sense-disambiguated English-German FrameNet lexicon FNWKde based on FNWKxx and translation disambiguation on the German Wiktionary.1 (3) We analyze the two resources and outline further steps for creating a multilingual FrameNet. This is a major step towards the vision of this paper: a simple, but powerful approach to partially construct a FrameNet in any language using Wiktionary as an interlingual representation.
2 Resource Overview
FrameNet (Baker et al., 1998) is an expert-built lexical-semantic resource incorporating the theory of frame semantics (Fillmore, 1976). It groups word senses in frames that represent particular situations. Thus, the verb complete and the noun completion belong to the Activity finish frame. The participants of these situations, typically realized as syntactic arguments, are the semantic roles of the frame, for instance the Agent performing an activity, or the Activity itself. FrameNet release 1.5 contains 1,015 frames and 11,942 word senses. Corpus texts annotated with frames and their roles have been used to train automatic SRL systems.
Wiktionary is a collaboratively created dictionary available in over 500 language editions. It is continuously extended and revised by a community of volunteer editors. The English language edition contains over 500,000 word senses.2 Wiktionary is organized like a traditional dictionary in lexical entries and word senses. For the word senses, definitions and example sentences may be available, as well as other lexical information such as register (e.g., colloquial), phonetic transcription, and inflection, including language-specific types of information. Senses also provide translations to other languages. These are connected to lexical entries in the respective language editions via hyperlinks. This allows us to use Wiktionary as an interlingual connection between multiple languages. The quality of Wiktionary has been confirmed by Meyer and Gurevych (2012b), who also give an overview of the usage of Wiktionary in NLP applications such as speech synthesis.
1 The xx in FNWKxx stands for all the languages in the resource. After translation disambiguation in a specific language, xx is replaced by the corresponding language code.
2 As of May 2013, see http://en.wiktionary.org/wiki/Wiktionary:Statistics.
Figure 1: Method overview.
3 Method Overview
Our method consists of two steps visualized in Fig. 1. In the first step, we create a novel sense alignment between FrameNet and the English Wiktionary following Niemann and Gurevych (2011).
Thus, the FrameNet sense of to complete with frame Activity finish is assigned to the sense of to complete in Wiktionary meaning to finish. This step establishes Wiktionary as an interlingual index between FrameNet senses and lemmas in many languages, and builds the foundation for the bilingual FrameNet extension. It results in a basic multilingual FrameNet lexicon FNWKxx with translations to lemmas in 283 languages. An example: by aligning the FrameNet sense of the verb complete with gloss to finish with the corresponding English Wiktionary sense, we collect 39 translations to 22 languages, e.g., the German fertigmachen and the Spanish terminar. The second step is the disambiguation of the translated lemmas with respect to the target language Wiktionary in order to retrieve the linguistic information of the corresponding word sense in the target language Wiktionary (Meyer and Gurevych, 2012a). We evaluate this step for English and German and create the bilingual FrameNet lexicon FNWKde. For the example sense of complete, we extract lexical information for the word sense of its German translation fertigmachen, for instance a German gloss, an example sentence, register information (colloquial), and synonyms, e.g., beenden. As a side-benefit of our method, we also extend the English FrameNet by the linguistic information in Wiktionary. 1364 4 Related Work 4.1 Creating FrameNets in New Languages There are two main lines of research in bootstrapping a FrameNet for languages other than English. The first, corpus-based approach is to automatically extract word senses in the target language based on parallel corpora and frame annotations in the source language. In this vein, Pad´o and Lapata (2005) propose a cross-lingual FrameNet extension to German and French; Johansson and Nugues (2005) and Johansson and Nugues (2006) do this for Spanish and Swedish, and Basili et al. (2009) for Italian. Pad´o and Lapata (2005) observe that their approach suffers from polysemy errors, because lemmas in the source language need to be disambiguated with respect to all the frames they evoke. To alleviate this problem, they use a disambiguation approach based on the most frequent frame; Basili et al. (2009) use distributional methods for frame disambiguation. Our approach is based on sense alignments and therefore explicitly aims to avoid such errors. The second line of work is resource-based: FrameNet is aligned to multilingual resources in order to extract senses in the target language. Using monolingual resources, this approach has also been employed to extend FrameNet coverage for English (Shi and Mihalcea, 2005; Johansson and Nugues, 2007; Ferrandez et al., 2010). De Cao et al. (2008) map FrameNet frames to WordNet synsets based on the embedding of FrameNet lemmas in WordNet. They use MultiWordNet, an English-Italian wordnet, to induce an Italian FrameNet lexicon with 15,000 entries. To create MapNet, Tonelli and Pianta (2009) align FrameNet senses with WordNet synsets by exploiting the textual similarity of their glosses. The similarity measure is based on stem overlap of the candidates’ glosses expanded by WordNet domains, the WordNet synset, and the set of senses for a FrameNet frame. In Tonelli and Pighin (2009), they use these features to train an SVMclassifier to identify valid alignments and report an F1-score of 0.66 on a manually annotated gold standard. They report 4,265 new English senses and 6,429 new Italian senses, which were derived via MultiWordNet. 
ExtendedWordFramenet (Laparra and Rigau, 2009; Laparra and Rigau, 2010) is also based on the alignment of FrameNet senses to WordNet synsets. The goal is the multilingual coverage extension of FrameNet, which is achieved by linking WordNet to wordnets in other languages (Spanish, Italian, Basque, and Catalan) in the Multilingual Central Repository. For each language, they add more then 10,000 senses to FrameNet. They rely on a knowledge-based word sense disambiguation algorithm to establish the alignment and report F1=0.75 on a gold standard based on Tonelli and Pighin (2009). Tonelli and Giuliano (2009) align FrameNet senses to Wikipedia entries with the goal to extract word senses and example sentences in Italian. Based on Wikipedia, this alignment is restricted to nouns. Subsequent work on Wikipedia and FrameNet follows a different path and tries to enhance the modeling of selectional preferences for FrameNet predicates (Tonelli et al., 2012). Finally, there have been suggestions to combine the corpus-based and the resource-based approaches: Borin et al. (2012) do this for Finnish and Swedish. They prove the feasibility of their approach by creating a preliminary Finnish FrameNet with 2,694 senses. Mouton et al. (2010) directly exploit the translations in the English and French Wiktionary editions to extend the French FrameNet. They match the FrameNet senses to Wiktionary lexical entries, thus encountering the problem of polysemy in the target language. To solve this, they define a set of filters that control how target lemmas are distributed over frames, increasing precision at the expense of recall (P=0.74, R=0.3, F1=0.42). While their approach is in theory applicable to other languages, our approach goes beyond this by laying the ground for simultaneous FrameNet extension in multiple languages via FNWKxx. 4.2 Wiktionary Sense Alignments Collaboratively created resources have become popular for sense alignments for NLP, starting with the alignment between WordNet and Wikipedia (Ruiz-Casado et al., 2005; Ponzetto and Navigli, 2009). Wiktionary has been subject to few alignment efforts: de Melo and Weikum (2009) integrate information from Wiktionary into Universal WordNet. Meyer and Gurevych (2011) map WordNet synsets to Wiktionary senses and show their complementary domain coverage. 1365 5 FrameNet – Wiktionary Alignment 5.1 Alignment Technique We follow the state-of-the-art sense alignment technique introduced by Niemann and Gurevych (2011). They align senses in WordNet to Wikipedia entries in a supervised setting using semantic similarity measures. One reason to use their method was that it allows zero alignments or one-to-many alignments. This is crucial for obtaining a high-quality alignment of heterogeneous resources, such as the presented one, because their sense granularity and coverage can diverge a lot. The alignment algorithm consists of two steps. In the candidate extraction step, we iterate over all FrameNet senses and match them with all senses from Wiktionary which have the same lemma and thus are likely to describe the same sense. This step yields a set of candidate sense pairs Call. In the classification step, a similarity score between the textual information associated with the senses in a candidate pair (e.g., their gloss) is computed and a threshold-based classifier decides for each pair on valid alignments. 
Niemann and Gurevych (2011) combine two different types of similarity: (i) cosine similarity on bag-of-words vectors (COS) and (ii) a personalized PageRank-based similarity measure (PPR). The PPR measure (Agirre and Soroa, 2009) maps the glosses of the two senses to a semantic vector space spanned by WordNet synsets and then compares them using the chi-square measure. The semantic vectors ppr are computed using the personalized PageRank algorithm on the WordNet graph. They determine the important nodes in the graph as the nodes that a random walker following the edges visits most frequently:

ppr = c · M · ppr + (1 − c) · v_ppr,   (1)

where M is a transition probability matrix between the n WordNet synsets, c is a damping factor, and v_ppr is a vector of size n representing the probability of jumping to the node i associated with each v_i. For personalized PageRank, v_ppr is initialized in a particular way: the initial weight is distributed equally over the m vector components (i.e., synsets) associated with a word in the sense gloss; other components receive a 0 value. For each similarity measure, Niemann and Gurevych (2011) determine a threshold (t_ppr and t_cos) independently on a manually annotated gold standard. The final alignment decision is the conjunction of two decision functions:

a(s_s, s_t) = PPR(s_s, s_t) > t_ppr & COS(s_s, s_t) > t_cos.   (2)

We differ from Niemann and Gurevych (2011) in that we use a joint training setup which determines t_ppr and t_cos to optimize classification performance directly (as proposed in Gurevych et al. (2012)):

(t_ppr, t_cos) = argmax_(t_ppr, t_cos) F1(a),   (3)

where F1 is the maximized evaluation score and a is the decision function in equation (2).
5.2 Candidate Extraction
To compile the candidate set, we paired senses from both resources with identical lemma-POS combinations. FrameNet senses are defined by a lemma, a gloss, and a frame. Wiktionary senses are defined by a lemma and a gloss. For the FrameNet sense Activity finish of the verb complete, we find two candidate senses in Wiktionary (to finish and to make whole). There are on average 3.7 candidates per FrameNet sense. The full candidate set C_all contains over 44,000 sense pairs and covers 97% of the 11,942 FrameNet senses.
5.3 Gold Standard Creation
For the gold standard, we sampled 2,900 candidate pairs from C_all. The properties of the gold standard mirror the properties of C_all: the sampling preserved the distribution of POS in C_all (around 40% verbs and nouns, and 12% adjectives) and the average numbers of candidates per FrameNet sense. This ensures that highly polysemous words as well as words with few senses are selected. Two human raters annotated the sense pairs based on their glosses. The annotation task consisted in a binary decision: do the presented senses have the same meaning (YES/NO)? The raters received detailed guidelines and were trained on around 100 sense pairs drawn from the sample. We computed Cohen's κ to measure the inter-rater agreement between the two raters. It is κ=0.72 on the full set, which is considered acceptable according to Artstein and Poesio (2008). An additional expert annotator disambiguated ties. For comparison: Meyer and Gurevych (2011) report κ=0.74 for their WordNet – Wiktionary gold standard, and Niemann and Gurevych (2011) κ=0.87 for their WordNet – Wikipedia gold standard. These gold standards only consist of nouns, which appear to be an easier annotation task than verb senses.

        adj    noun   verb   all
κ       .80    .77    .65    .72
Table 1: Inter-rater agreement.
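A sketch of the joint training step in equations (2)–(3): given gold-standard candidate pairs with precomputed PPR and COS similarities, a grid search picks the threshold pair that maximises F1 of the conjunctive decision function. The grid resolution is an assumption for illustration; the actual optimisation in the paper may be implemented differently.

```python
import itertools

def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def decide(ppr, cos, t_ppr, t_cos):        # eq. (2): conjunction of both tests
    return ppr > t_ppr and cos > t_cos

def joint_thresholds(gold, step=0.01):     # eq. (3): argmax over (t_ppr, t_cos)
    """gold: list of (ppr_sim, cos_sim, is_aligned) tuples."""
    grid = [i * step for i in range(int(1 / step) + 1)]
    best, best_f1 = (0.0, 0.0), -1.0
    for t_ppr, t_cos in itertools.product(grid, grid):
        tp = sum(1 for p, c, y in gold if decide(p, c, t_ppr, t_cos) and y)
        fp = sum(1 for p, c, y in gold if decide(p, c, t_ppr, t_cos) and not y)
        fn = sum(1 for p, c, y in gold if not decide(p, c, t_ppr, t_cos) and y)
        score = f1(tp, fp, fn)
        if score > best_f1:
            best, best_f1 = (t_ppr, t_cos), score
    return best, best_f1
```

Optimising both thresholds jointly rather than per measure is what distinguishes this setup from the original Niemann and Gurevych (2011) procedure.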
This is supported by our analysis of the agreement by POS (see Table 1): the agreement on nouns and adjectives lies between the two agreement scores previously reported on nouns. Thus our annotation is of similar quality. Only the agreement on verbs is slightly below the acceptability threshold of 0.67 (Artstein and Poesio, 2008). The verb senses are very fine-grained and thus present a difficult alignment task. Therefore, we had an expert annotator correct the verbal part of the gold standard set. After removing the training set for the raters, the final gold standard contains 2,789 sense pairs. 28% of these are aligned.
5.4 Alignment Experiments
We determined the best setting for the alignment of FrameNet and Wiktionary in a ten-fold cross-validation on the gold standard. Besides the parameters for the computation of the PPR vectors (we used the publicly available UKB tool by Agirre and Soroa (2009)), the main parameter in the experiments is the textual information that is used to represent the senses. For FrameNet senses, we used the lemma-pos, sense gloss, example sentences, frame name and frame definition as textual features; for Wiktionary senses, we considered lemma-pos, sense gloss, example sentences, hyponyms and synonyms. We computed the similarity scores on tokenized, lemmatized and stop-word-filtered texts. First, we evaluated models for COS and PPR independently based on various combinations of the textual features listed above. We then used the parameter setting of the best-performing single models to train the model that jointly optimizes the thresholds for PPR and COS (see eqn. (3)). In Table 2, we report on the results of the best single models and the best joint model. For the evaluation, we compute precision P, recall R and F1 on the positive class (aligned=true); e.g., precision P is the number of pairs correctly aligned divided by all aligned pairs. We achieved the highest precision and F1-score for COS using all available features, but excluding FrameNet example sentences because they introduce too much noise. Adding the frame name and frame definition to the often short glosses provides a richer sense representation for the COS measure. The best-performing PPR configuration uses sense gloss and lemma-pos. For the joint model, we employed the best single PPR configuration, and a COS configuration that uses the sense gloss extended by Wiktionary hypernyms, synonyms and FrameNet frame name and frame definition, to achieve the highest score, an F1-score of 0.739.

Evaluation              verb    noun    adj     all
P   Random-1 BL         0.503   0.559   0.661   0.557
    WKT-1 BL            0.620   0.664   0.725   0.66
    BEST COS            0.639   0.778   0.706   0.703
    BEST PPR            0.66    0.754   0.729   0.713
    BEST JOINT          0.677   0.766   0.742   0.728
R   Random-1 BL         0.471   0.546   0.683   0.540
    WKT-1 BL            0.581   0.65    0.75    0.64
    BEST COS            0.658   0.758   0.754   0.715
    BEST PPR            0.666   0.724   0.754   0.699
    BEST JOINT          0.683   0.783   0.83    0.75
F1  Random-1 BL         0.487   0.552   0.672   0.549
    WKT-1 BL            0.60    0.657   0.737   0.65
    BEST COS            0.648   0.768   0.729   0.709
    BEST PPR            0.663   0.739   0.741   0.706
    BEST JOINT          0.68    0.775   0.784   0.739
    UBound              0.735   0.834   0.864   0.797
Table 2: Alignment performance by POS.

5.5 Gold Standard Evaluation
We compared the performance of our alignment on the gold standard to a baseline which randomly selects one target sense from the candidate set of each source sense (Random-1). We also consider the more competitive Wiktionary first sense baseline (WKT-1). It is guided by the heuristic that more frequent senses are listed first in Wiktionary (Meyer and Gurevych, 2010).
It is a stronger baseline with an F1-score of 0.65 (see Table 2). To derive the upper bound for the alignment performance (UBound), we computed the F1 score from the average pairwise F1-score of the annotators according to Hripcsak and Rothschild (2005). As the evaluation set mirrors the POS distribution in FrameNet and is sufficiently large, unlike earlier alignments, we can analyze the performance by POS. The BEST JOINT model performs well on nouns, slightly better on adjectives, and worse on verbs (see Table 2). For the baselines and the UBound the same applies, with the difference that adjectives receive even better results in comparison. This fits in with the perceived degree of difficulty according to the observed polysemy for each POS: for verbs we have many candidate sets with two or more candidates, i.e., we observe higher polysemy, while for nouns, and even more so for adjectives, many small candidate sets occur, which make for an easier alignment decision. This is in line with the reported higher complexity of lexical resources with respect to verbs and the greater difficulty in alignments and word sense disambiguation (Laparra and Rigau, 2010). The performance of BEST JOINT on all POS is F1=0.73, which is significantly higher than the WKT-1 baseline (p<0.05 according to McNemar's test). The performance on nouns (F1=0.775) is on par with the results reported by Niemann and Gurevych (2011) for nouns (F1=0.78).
5.6 Error Analysis
The confusion matrix from the evaluation of BEST JOINT on the gold standard shows 214 false positives and 191 false negatives. The false negatives suffer from low overlap between the glosses, which are often quite short (contend – assert), sometimes circular (sinful – relating to sin). Aligning senses with such glosses is difficult for a system based on semantic similarity. In about 50% of the analyzed pairs, highly similar words are used in the glosses, which we should be able to detect with second-order representations, for instance by expanding short definitions with the definitions of the contained words, or via derivational similarity. A number of false positives occur because the gold standard was developed in a very fine-grained manner: distinctions such as causative vs. inchoative (enlarge: become large vs. enlarge: make large) were explicitly stressed in the definitions and thus annotated as different senses by the annotators. This was motivated by the fact that this distinction is relevant for many frames in FrameNet. The first meaning of enlarge belongs to the frame Expansion, the second to Cause expansion. Our similarity-based approach cannot capture such differences well.
6 Intermediate Resource FNWKxx
6.1 Statistics
We applied the best system setup to the full candidate set of over 44,000 candidates to create the intermediate resource FNWKxx. The alignment consists of 12,094 sense pairs. It covers 82% of the senses in FrameNet and 86% of the frames. It connects more than 9,800 unique FrameNet senses with more than 10,000 unique Wiktionary senses, which shows that both non-alignments and multiple alignments occur for some source senses.

                fine-grained P          coarse-grained P
All POS         0.67                    0.78
By POS          verb   noun   adj       verb   noun   adj
                0.53   0.73   0.80      0.73   0.82   0.85
Table 3: Post-hoc evaluation (precision P).

6.2 Post-hoc Evaluation
Our cross-validation approach entails the danger of over-fitting.
In order to verify the quality of the alignment, we performed a detailed post-hoc analysis on a sample of 270 aligned sense pairs randomly drawn from the set of aligned senses. Because sense granularity was an issue in the error analysis, we considered two alignment decisions: (a) fine-grained alignment, where the two glosses describe the same sense; and (b) coarse-grained alignment, where the causative/inchoative distinction, among others, is ignored. The evaluation results are listed in Table 3. The precision for the fine-grained setting (a) is lower than the overall precision on the gold standard. The evaluation by POS shows that the result for nouns and adjectives is equal or superior to the evaluation result on the gold standard, while it is worse for verbs. This shows that over-fitting, if it occurs at all, is only a risk for the verb senses. The overall precision for (b) exceeds the precision on the gold standard; verbs in particular receive much better results. This shows that a coarse-grained alignment may suffice for the FrameNet extension. This evaluation confirms the quality of the sense alignment, in particular with respect to the FrameNet extension. But it also raises the question of whether a coarse-grained alignment would suffice. We will discuss this question below.
6.3 Resource Analysis
For each of the aligned senses in the 12,094 aligned sense pairs, we extracted glosses from Wiktionary. Because FrameNet glosses are often very brief, the additional glosses will benefit algorithms such as frame detection for SRL. We also added 4,352 new example sentences from Wiktionary to FrameNet. We can extract 2,151 new lemma-POS for FrameNet frames from the synonyms of the aligned senses in Wiktionary. We also extract other related lemma-POS, for instance 487 antonyms, 126 hyponyms, and 19 hypernyms. This step establishes Wiktionary as an interlingual connection between FrameNet and a large number of languages, including low-resource ones: via Wiktionary, we connect FrameNet senses to translations in 283 languages, e.g., we translate the sense of the verb complete associated with the frame Activity finish to the German colloquial fertigmachen, the Spanish terminar, the Turkish tamamlamak, and 19 other languages. For 36 languages, we can extract more than 1,000 translations each, among them low-resource languages such as Telugu, Swahili, or Kurdish. The languages with the most translations are Finnish (9,333), Russian (7,790), and German (6,871). The number of Finnish translations is more than three times larger than the preliminary Finnish FrameNet by Borin et al. (2012). Likewise, we get three times the number of German lemma-POS provided by the SALSA corpus.
7 Translation Disambiguation
7.1 Disambiguation Method
FNWKxx initially does not provide lexical-semantic information for the German translations: the translations link to a lemma in the German Wiktionary, not a target sense. In order to integrate the information attached to a German Wiktionary sense, e.g., the gloss, into our resource, the lemmas need to be disambiguated. We use the sense-disambiguated Wiktionary resulting from a recently published approach for the disambiguation of relations and translations in Wiktionary (Meyer and Gurevych, 2012a) to create our new bilingual (German-English) FrameNet lexicon FNWKde. Their approach combines information on the source sense and all potential target senses in order to determine the best target sense in a rule-based disambiguation strategy.
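A minimal sketch of how such a rule-based back-off can be organised: each rule is tried in order on the candidate target senses, the first rule that singles out exactly one sense wins, and the procedure falls back to the first-listed sense. The two rules shown are illustrative stand-ins; the concrete features and their order used by Meyer and Gurevych (2012a) are summarised in the next paragraph.

```python
def disambiguate(source, targets, rules):
    """Return the target sense selected by the first rule that fires.
    `rules` is an ordered list of functions (source, target) -> bool."""
    for rule in rules:
        matching = [t for t in targets if rule(source, t)]
        if len(matching) == 1:          # a rule must single out one sense
            return matching[0]
    return targets[0] if targets else None   # back-off: first target sense

# Illustrative rules over dict-based sense representations (hypothetical keys):
def gloss_overlap(source, target):
    return len(set(source["gloss_translated"].split())
               & set(target["gloss"].split())) >= 2

def inverse_translation(source, target):
    return source["lemma"] in target.get("translations", [])
```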
The information is encoded as binary features, which are ordered in a back-off hierarchy: if the first feature applies, the target sense is selected, otherwise the second feature is considered, and so forth. The most important features are: definition overlap between source and automatically transSALSA2 P&L05 FNWKde Type Corpus Corpus Lexicon Creation Manual Automatic Automatic Frames(+p) 266(907) 468 755 Senses 1,813 9,851 5,897 Examples 24,184 1,672,551 6,933 Glosses 5,897 Table 4: Frame-semantic resources for German. lated target definition; occurrence of the source lemma in the target definition; shared linguistic information (e.g., same register); inverse translation relations (i.e., the source lemma occurs on the translation list of the target sense); relation overlap; Lesk measure between original and translated glosses in source and target language; and finally, backing off to the first target sense. For the gold standard evaluation of the approach we refer to Meyer and Gurevych (2012a): their system obtained an F1-score of 0.67 for the task of disambiguating translations from English to German, and an F1-score of 0.79 for the disambiguation of English sense relations. We use the latter to identify target senses of synonyms in FNWKxx. 8 Resource FNWKde 8.1 Statistics Table 4 gives an overview of FNWKde. It contains 5,897 pairs of German Wiktionary senses and FrameNet senses, i.e., 86% of the translations could be disambiguated. Each sense has a gloss, and there are 6,933 example sentences. Based on the relation disambiguation and inference of new relations by Meyer and Gurevych (2012a), we can also disambiguate synonyms in the English Wiktionary. This leads to a further extension of the English FrameNet summarized in Table 5. The number of Wiktionary senses aligned to FrameNet senses is increased by 50%. We also provide results for other sense relations, e.g., antonyms. We will discuss whether and how they can be integrated as FrameNet senses in our resource below. 8.2 Post-hoc Evaluation Because the errors of two subsequently applied automatic methods can multiply, we provide a posthoc evaluation of the results. To evaluate the quality of the German FrameNet lexicon, we collected the FrameNet senses for a list of 15 frames that were sampled by Pad´o and 1369 # English senses # English senses Relation per FrameNet sense per frame SYNONYM 17,713 13,288 HYPONYM 4,818 3,347 HYPERNYM 6,369 3,961 ANTONYM 9,626 6,737 Table 5: Statistics after relation disambiguation. Lapata (2005) according to three frequency bands on a large corpus. There are 115 senses associated with these frames in our resource. In a manual evaluation of these 115 senses, we find that 67% were assigned correctly to their frames. This is higher than expected, considering the errors from the applied methods add up. Further analysis revealed that both resource creation steps contribute equally to the 39 errors. For 17 of the evaluated sense pairs, redundancy confirms their quality: they were obtained independently by two or three alignment-and-translation paths and do not contain alignment errors. 8.3 Comparison We compare FNWKde to two German framesemantic resources, the manually annotated SALSA corpus (Burchardt et al., 2006) and a resource from Pad´o and Lapata (2005), henceforth P&L05. Note that both resources are frameannotated corpora, while FNWKde is a FrameNetlike lexicon and contains information complementary to the corpora. The different properties of the resources are contrasted in Table 4. 
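The rule-based back-off strategy of Section 7.1 can be made concrete as an ordered list of binary checks over candidate target senses, where the first check that fires selects a sense and the first listed sense is the fallback. The sketch below is a simplified stand-in for the method of Meyer and Gurevych (2012a): the feature list is abbreviated, and the overlap threshold, the sense record layout, the translate hook (a machine translation step in the original), and the toy German senses are our assumptions.

```python
def overlap(gloss_a, gloss_b):
    """Crude token overlap between two glosses (a stand-in for the real measures)."""
    return len(set(gloss_a.lower().split()) & set(gloss_b.lower().split()))

def disambiguate(source_sense, target_senses, translate):
    """Pick a target-language Wiktionary sense for a translated lemma.

    Binary features are tried in a fixed back-off order; the first sense
    satisfying a feature is returned, the first listed sense is the fallback.
    """
    translated_gloss = translate(source_sense["gloss"])
    features = [
        lambda t: overlap(translated_gloss, t["gloss"]) >= 2,            # definition overlap
        lambda t: source_sense["lemma"].lower() in t["gloss"].lower(),   # lemma in definition
        lambda t: source_sense["lemma"] in t.get("translations", ()),    # inverse translation
    ]
    for feature in features:
        for target in target_senses:
            if feature(target):
                return target
    return target_senses[0]  # first-sense back-off

# Toy example: an English FrameNet sense and two invented German Wiktionary senses.
senses_de = [
    {"gloss": "etwas beenden oder zum Abschluss bringen", "translations": ["complete"]},
    {"gloss": "jemanden stark kritisieren", "translations": []},
]
src = {"lemma": "complete", "gloss": "to finish; to bring to an end"}
print(disambiguate(src, senses_de, translate=lambda g: g)["gloss"])  # identity as placeholder MT
```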
The automatically developed resources, including FNWKde, provide a larger number of senses than SALSA. The annotated corpora contain a large number of examples, but they do not provide any glosses, which are useful for frame detection in SRL, nor do they contain any other lexicalsemantic information. FNWKde covers a larger number of FrameNet frames than the other two resources. 266 of the 907 frames in SALSA are connected to original FrameNet frames, the others are newly-developed proto-frames p (shown in parentheses in Table 4). Table 6 describes the proportion of the overlapping frames and senses3 to the respective resources. The numbers on frame overlap show that our resource covers the frames in the other 3Note that the senses in SALSA and P&L05 are defined by frame, lemma, and POS. In Table 6, FNWKde senses with identical frame, lemma, and POS, but different gloss are therefore conflated to one sense. Resource r % of r % of FNWKde Frame SALSA 2 89% 31% P&L05 90% 55% Sense SALSA 2 15% 5% P&L05 10% 19% Table 6: Overlap of FNWKde with resource r. resources well (89% and 90% coverage respectively), and that it adds frames not covered in the other resources: P&L05 only covers 55% of the frames in FNWKde. The sense overlap shows that the resources have senses in common, which confirms the quality of the automatically developed resources, but they also complement each other. FNWKde, for instance, adds 3,041 senses to P&L05. 9 Discussion: a Multilingual FrameNet FNWKxx builds an excellent starting point to create FrameNet lexicons in various languages: the translation counts, for instance 6,871 for German, compare favorably to FrameNet 1.5, which contains 9,700 English lemma-POS. To create those FrameNet lexicons, the translation disambiguation approach used for FNWKde (step 2 in Fig. 1) needs to be adapted to other languages. The approach is in theory applicable to any language, but there are some obstacles: first, it relies on the availability of the target sense in the target language Wiktionary. For many of the top 30 languages in FNWKxx, the Wiktionary editions seem sufficiently large to provide targets for translation disambiguation,4 and they are continuously extended. Second, our approach requires access to the target language Wiktionary, but the data format across Wiktionary language editions is not standardized. Third, the approach requires machine translation into the target language. For languages, where such a tool is not available, we could default to the first-sense-heuristic, or encourage the Wiktionary community to link the translations to their target Wiktionary senses inspired by Sajous et al. (2010). Another issue that applies to all automatic (and also manual) approaches of cross-lingual FrameNet extension is the restricted crosslanguage applicability of frames. Boas (2005) reports that, while many frames are largely 4see overview table at http://www.ukp. tu-darmstadt.de/fnwkde/. 1370 language-independent, other frames receive culture-specific or language-specific interpretations, for example calendars or holidays. Also, fine-grained sense and frame distinctions may be more relevant in one language than in another language. Such granularity differences also led to the addition of proto-frames in SALSA 2 (Rehbein et al., 2012). Therefore, manual correction or extension of a multilingual FrameNet based on FNWKde may be desired for specific applications. 
In this case, the automatically created FrameNets in other languages are good starting points that can be quickly and efficiently compiled. The quality of the multilingual FNWKxx depends on i) the translations in the interlingual connection Wiktionary, which are manually created, controlled by the community, and therefore reliable, and ii) on the FrameNet–Wiktionary alignment. Therefore, we evaluated our sense alignment method in detail. The alignment reaches state-of-the-art results, and the analysis shows that the method is particularly fit for a coarse-grained alignment. We however find lower performance for verbs in a fine-grained setting. We argue that an improved alignment algorithm, for instance taking subcategorization information into account, can identify the fine-grained distinctions. The post-hoc analysis raised the question of FrameNet frame granularity. Do separate frames exist for causative/inchoative alternations (as Being dry and Cause to be dry for to dry), or do they belong to the same frame (Make noise for to creak and to creak something)? For the coarse-grained frames, fine-grained decisions can be merged in a second classification step. Alternatively, we could map Wiktionary senses directly to frames, and include features that cover the granularity distinctions, e.g., whether the existing senses of a frame show the semantic alternation. We could use the same approach to assign senses to a frame which are derived via sense relations other than synonymy, i.e., for linking antonyms or hyponyms to a frame. Some frames do cover antonymous predicates, others do not. Based on Wiktionary, our approach suffers less from the disadvantages of previous resource-based work, i.e., the constraints of expert-built resources and the lack of lexical information in Wikipedia. Unlike corpus-based approaches for cross-lingual FrameNet extension, our approach does not provide frame-semantic annotations for the example sentences. Our advantage is that we create a FrameNet lexicon with lexical-semantic information in the target language. Example annotations can be additionally obtained via cross-lingual annotation projection (Pad´o and Lapata, 2009), and the lexical information in FNWKde can be used to guide this process. 10 Conclusion The resource-coverage bottleneck for framesemantic resources is particularly severe for less well-resourced languages. We present a simple, but effective approach to solve this problem using the English Wiktionary as an interlingual representation and subsequent translation disambiguation in the target language. We validate our approach on the language pair English-German and discuss the options and requirements for creating FrameNets in further languages. As part of this work, we created the first sense alignment between FrameNet and the English Wiktionary. The resulting resource FNWKxx connects FrameNet senses to over 280 languages. The bilingual English-German FrameNet lexicon FNWKde competes with manually created resources, as shown by a comparison to the SALSA corpus. We make both resources publicly available in the standardized format UBY-LMF (Eckle-Kohler et al., 2012), which supports automatic processing of the resources via the UBY Java API, see http://www.ukp.tu-darmstadt.de/ fnwkde/. We also extended FrameNet by several thousand new English senses from Wiktionary which are provided as part of FNWKde. In our future work, we will evaluate the benefits of the extracted information to SRL. 
Acknowledgments This work has been supported by the Volkswagen Foundation as part of the LichtenbergProfessorship Program under grant No. I/82806 and by the German Research Foundation under grant No. GU 798/3-1 and grant No. GU 798/9-1. We thank Christian Meyer and Judith-Eckle Kohler for insightful discussions and comments, and Christian Wirth for contributions in the early stage of this project. We also thank the anonymous reviewers for their helpful remarks. 1371 References Eneko Agirre and Aitor Soroa. 2009. Personalizing PageRank for Word Sense Disambiguation. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 33–41, Athens, Greece. Ron Artstein and Massimo Poesio. 2008. Inter-Coder Agreement for Computational Linguistics. Computational Linguistics, 34(4):555–596. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL’98), pages 86–90, Montreal, Canada. Roberto Basili, Diego Cao, Danilo Croce, Bonaventura Coppola, and Alessandro Moschitti. 2009. Crosslanguage frame semantics transfer in bilingual corpora. In Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, volume 5449 of Lecture Notes in Computer Science, pages 332–345. Springer Berlin Heidelberg. Hans C. Boas. 2005. Semantic Frames as Interlingual Representations for Multilingual Lexical Databases. International Journal of Lexicography, 18(4):445– 478. Lars Borin, Markus Forsberg, Richard Johansson, Kristiina Muhonen, Tanja Purtonen, and Kaarlo Voionmaa. 2012. Transferring frames: Utilization of linked lexical resources. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 8–15, Montr´eal, Canada. Aljoscha Burchardt, Katrin Erk, Anette Frank, Andrea Kowalski, Sebastian Pado, and Manfred Pinkal. 2006. The SALSA corpus: a German corpus resource for lexical semantics. In Proceedings of the 5th International Conference on Language Resources and Evaluation, pages 969–974, Genoa, Italy. Diego De Cao, Danilo Croce, Marco Pennacchiotti, and Roberto Basili. 2008. Combining word sense and usage for modeling frame semantics. In Proceedings of the 2008 Conference on Semantics in Text Processing, STEP ’08, pages 85–101, Stroudsburg, PA, USA. Gerard de Melo and Gerhard Weikum. 2009. Towards a universal wordnet by learning from combined evidence. In Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009), pages 513–522, New York, NY, USA. Judith Eckle-Kohler, Iryna Gurevych, Silvana Hartmann, Michael Matuschek, and Christian M. Meyer. 2012. UBY-LMF - A Uniform Model for Standardizing Heterogeneous Lexical-Semantic Resources in ISO-LMF. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC’12), pages 275–282, Istanbul, Turkey. Oscar Ferrandez, Michael Ellsworth, Rafael Munoz, and Collin F. Baker. 2010. Aligning FrameNet and WordNet based on Semantic Neighborhoods. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), pages 310–314, Valletta, Malta. Charles J. Fillmore. 1976. Frame Semantics and the Nature of Language. In Annuals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech, volume 280, pages 20–32. 
New York Academy of Sciences, New York, NY, USA. Iryna Gurevych, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M. Meyer, and Christian Wirth. 2012. Uby - A Large-Scale Unified Lexical-Semantic Resource Based on LMF. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012), pages 580–590, Avignon, France. George Hripcsak and Adam S. Rothschild. 2005. Agreement, the F-Measure, and Reliability in Information Retrieval. Journal of the American Medical Informatics Association, 12(3):296–298. Richard Johansson and Pierre Nugues. 2005. Using Parallel Corpora for Automatic Transfer of FrameNet Annotation. In Proceedings of the 1st ROMANCE FrameNet Workshop, Cluj-Napoca, Romania. Richard Johansson and Pierre Nugues. 2006. A framenet-based semantic role labeler for swedish. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 436–443, Sydney, Australia, July. Richard Johansson and Pierre Nugues. 2007. Using WordNet to extend FrameNet coverage. In Proceedings of the Workshop on Building Frame-semantic Resources for Scandinavian and Baltic Languages, at NODALIDA, pages 27–30, Tartu, Estonia. Egoitz Laparra and German Rigau. 2009. Integrating WordNet and FrameNet using a Knowledge-based Word Sense Disambiguation Algorithm. In Proceedings of the International Conference RANLP2009, pages 208–213, Borovets, Bulgaria. Egoitz Laparra and German Rigau. 2010. eXtended WordFrameNet. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), pages 1214–1419, Valletta, Malta. Christian M. Meyer and Iryna Gurevych. 2010. How Web Communities Analyze Human Language: Word Senses in Wiktionary. In Proceedings of the Second Web Science Conference, Raleigh, NC, USA. 1372 Christian M. Meyer and Iryna Gurevych. 2011. What Psycholinguists Know About Chemistry: Aligning Wiktionary and WordNet for Increased Domain Coverage. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 883–892, Chiang Mai, Thailand. Christian M. Meyer and Iryna Gurevych. 2012a. To Exhibit is not to Loiter: A Multilingual, SenseDisambiguated Wiktionary for Measuring Verb Similarity. In Proceedings of COLING 2012, pages 1763–1780, Mumbai, India. Christian M. Meyer and Iryna Gurevych. 2012b. Wiktionary: A new rival for expert-built lexicons? Exploring the possibilities of collaborative lexicography. In Sylviane Granger and Magali Paquot, editors, Electronic Lexicography, pages 259–291. Oxford University Press, Oxford. Behrang Mohit and Srini Narayanan. 2003. Semantic Extraction with Wide-Coverage Lexical Resources. In Proceedings of HLT-NAACL 2003: Companion Volume, pages 64–66, Edmonton, Canada. Claire Mouton, Ga¨el de Chalendar, and Benoˆıt Richert. 2010. FrameNet Translation Using Bilingual Dictionaries with Evaluation on the English-French Pair. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), pages 20–27, Valletta, Malta. Srini Narayanan and Sanda Harabagiu. 2004. Question Answering Based on Semantic Structures. In Proceedings of the 20th international conference on Computational Linguistics - COLING ’04, pages 693–701, Geneva, Switzerland. Roberto Navigli and Simone Paolo Ponzetto. 2010. Babelnet: Building a very large multilingual semantic network. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 216–225, Uppsala, Sweden. Elisabeth (geb. 
Wolf) Niemann and Iryna Gurevych. 2011. The People’s Web meets Linguistic Knowledge: Automatic Sense Alignment of Wikipedia and WordNet. In Proceedings of the International Conference on Computational Semantics (IWCS), pages 205–214, Singapore. Sebastian Pad´o and Mirella Lapata. 2005. Crosslingual bootstrapping of semantic lexicons: the case of FrameNet. In Proceedings of the 20th national conference on Artificial intelligence - Volume 3, AAAI’05, pages 1087–1092, Pittsburgh, PA, USA. Sebastian Pad´o and Mirella Lapata. 2009. Crosslingual Annotation Projection for Semantic Roles. Journal of Artificial Intelligence Research, 36:307– 340. Simone Paolo Ponzetto and Roberto Navigli. 2009. Large-Scale Taxonomy Mapping for Restructuring and Integrating Wikipedia. In Proceedings of the 21st International Joint Conference on AI, pages 2083–2088, Pasadena, CA, USA. Ines Rehbein, Joseph Ruppenhofer, Caroline Sporleder, and Manfred Pinkal. 2012. Adding nominal spice to SALSA - frame-semantic annotation of German nouns and verbs. In Proceedings of the 11th Conference on Natural Language Processing (KONVENS’12), pages 89–97, Vienna, Austria. Maria Ruiz-Casado, Enrique Alfonseca, and Pablo Castells. 2005. Automatic Assignment of Wikipedia Encyclopedic Entries to WordNet Synsets. In Advances in Web Intelligence, volume 3528 of Lecture Notes in Computer Science, pages 380–386. Springer, Berlin Heidelberg. Franck Sajous, Emmanuel Navarro, Bruno Gaume, Laurent Pr´evot, and Yannick Chudy. 2010. Semiautomatic endogenous enrichment of collaboratively constructed lexical resources: piggybacking onto wiktionary. In Proceedings of the 7th international conference on Advances in natural language processing, IceTAL’10, pages 332–344. Springer, Berlin, Heidelberg. Lei Shi and Rada Mihalcea. 2005. Putting pieces together: Combining FrameNet, VerbNet and WordNet for robust semantic parsing. In Computational Linguistics and Intelligent Text Processing, pages 100–111. Springer, Berlin Heidelberg. Sara Tonelli and Claudio Giuliano. 2009. Wikipedia as frame information repository. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 276–285, Singapore. Sara Tonelli and Emanuele Pianta. 2009. A novel approach to mapping FrameNet lexical units to WordNet synsets. In IWCS-8 ’09: Proceedings of the Eighth International Conference on Computational Semantics, pages 342–345, Tilburg, The Netherlands. Sara Tonelli and Daniele Pighin. 2009. New Features for FrameNet - WordNet Mapping. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 219– 227, Boulder, CO, USA. Sara Tonelli, Volha Bryl, Claudio Giuliano, and Luciano Serafini. 2012. Investigating the semantics of frame elements. In Knowledge Engineering and Knowledge Management, volume 7603 of Lecture Notes in Computer Science, pages 130–143. Springer Berlin Heidelberg. 1373
2013
134
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1374–1383, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Dirt Cheap Web-Scale Parallel Text from the Common Crawl Jason R. Smith1,2 [email protected] Philipp Koehn3 [email protected] Herve Saint-Amand3 [email protected] Chris Callison-Burch1,2,5 [email protected] ∗ Magdalena Plamada4 [email protected] Adam Lopez1,2 [email protected] 1Department of Computer Science, Johns Hopkins University 2Human Language Technology Center of Excellence, Johns Hopkins University 3School of Informatics, University of Edinburgh 4Institute of Computational Linguistics, University of Zurich 5Computer and Information Science Department, University of Pennsylvania Abstract Parallel text is the fuel that drives modern machine translation systems. The Web is a comprehensive source of preexisting parallel text, but crawling the entire web is impossible for all but the largest companies. We bring web-scale parallel text to the masses by mining the Common Crawl, a public Web crawl hosted on Amazon’s Elastic Cloud. Starting from nothing more than a set of common two-letter language codes, our open-source extension of the STRAND algorithm mined 32 terabytes of the crawl in just under a day, at a cost of about $500. Our large-scale experiment uncovers large amounts of parallel text in dozens of language pairs across a variety of domains and genres, some previously unavailable in curated datasets. Even with minimal cleaning and filtering, the resulting data boosts translation performance across the board for five different language pairs in the news domain, and on open domain test sets we see improvements of up to 5 BLEU. We make our code and data available for other researchers seeking to mine this rich new data resource.1 1 Introduction A key bottleneck in porting statistical machine translation (SMT) technology to new languages and domains is the lack of readily available parallel corpora beyond curated datasets. For a handful of language pairs, large amounts of parallel data ∗This research was conducted while Chris CallisonBurch was at Johns Hopkins University. 1github.com/jrs026/CommonCrawlMiner are readily available, ordering in the hundreds of millions of words for Chinese-English and ArabicEnglish, and in tens of millions of words for many European languages (Koehn, 2005). In each case, much of this data consists of government and news text. However, for most language pairs and domains there is little to no curated parallel data available. Hence discovery of parallel data is an important first step for translation between most of the world’s languages. The Web is an important source of parallel text. Many websites are available in multiple languages, and unlike other potential sources— such as multilingual news feeds (Munteanu and Marcu, 2005) or Wikipedia (Smith et al., 2010)— it is common to find document pairs that are direct translations of one another. This natural parallelism simplifies the mining task, since few resources or existing corpora are needed at the outset to bootstrap the extraction process. Parallel text mining from the Web was originally explored by individuals or small groups of academic researchers using search engines (Nie et al., 1999; Chen and Nie, 2000; Resnik, 1999; Resnik and Smith, 2003). However, anything more sophisticated generally requires direct access to web-crawled documents themselves along with the computing power to process them. 
For most researchers, this is prohibitively expensive. As a consequence, web-mined parallel text has become the exclusive purview of large companies with the computational resources to crawl, store, and process the entire Web. To put web-mined parallel text back in the hands of individual researchers, we mine parallel text from the Common Crawl, a regularly updated 81-terabyte snapshot of the public internet hosted 1374 on Amazon’s Elastic Cloud (EC2) service.2 Using the Common Crawl completely removes the bottleneck of web crawling, and makes it possible to run algorithms on a substantial portion of the web at very low cost. Starting from nothing other than a set of language codes, our extension of the STRAND algorithm (Resnik and Smith, 2003) identifies potentially parallel documents using cues from URLs and document content (§2). We conduct an extensive empirical exploration of the web-mined data, demonstrating coverage in a wide variety of languages and domains (§3). Even without extensive pre-processing, the data improves translation performance on strong baseline news translation systems in five different language pairs (§4). On general domain and speech translation tasks where test conditions substantially differ from standard government and news training text, web-mined training data improves performance substantially, resulting in improvements of up to 1.5 BLEU on standard test sets, and 5 BLEU on test sets outside of the news domain. 2 Mining the Common Crawl The Common Crawl corpus is hosted on Amazon’s Simple Storage Service (S3). It can be downloaded to a local cluster, but the transfer cost is prohibitive at roughly 10 cents per gigabyte, making the total over $8000 for the full dataset.3 However, it is unnecessary to obtain a copy of the data since it can be accessed freely from Amazon’s Elastic Compute Cloud (EC2) or Elastic MapReduce (EMR) services. In our pipeline, we perform the first step of identifying candidate document pairs using Amazon EMR, download the resulting document pairs, and perform the remaining steps on our local cluster. We chose EMR because our candidate matching strategy fit naturally into the Map-Reduce framework (Dean and Ghemawat, 2004). Our system is based on the STRAND algorithm (Resnik and Smith, 2003): 1. Candidate pair selection: Retrieve candidate document pairs from the CommonCrawl corpus. 2. Structural Filtering: (a) Convert the HTML of each document 2commoncrawl.org 3http://aws.amazon.com/s3/pricing/ into a sequence of start tags, end tags, and text chunks. (b) Align the linearized HTML of candidate document pairs. (c) Decide whether to accept or reject each pair based on features of the alignment. 3. Segmentation: For each text chunk, perform sentence and word segmentation. 4. Sentence Alignment: For each aligned pair of text chunks, perform the sentence alignment method of Gale and Church (1993). 5. Sentence Filtering: Remove sentences that appear to be boilerplate. Candidate Pair Selection We adopt a strategy similar to that of Resnik and Smith (2003) for finding candidate parallel documents, adapted to the parallel architecture of Map-Reduce. The mapper operates on each website entry in the CommonCrawl data. It scans the URL string for some indicator of its language. Specifically, we check for: 1. Two/three letter language codes (ISO-639). 2. Language names in English and in the language of origin. 
If either is present in a URL and surrounded by non-alphanumeric characters, the URL is identified as a potential match and the mapper outputs a key value pair in which the key is the original URL with the matching string replaced by *, and the value is the original URL, language name, and full HTML of the page. For example, if we encounter the URL www.website.com/fr/, we output the following. • Key: www.website.com/*/ • Value: www.website.com/fr/, French, (full website entry) The reducer then receives all websites mapped to the same “language independent” URL. If two or more websites are associated with the same key, the reducer will output all associated values, as long as they are not in the same language, as determined by the language identifier in the URL. This URL-based matching is a simple and inexpensive solution to the problem of finding candidate document pairs. The mapper will discard 1375 most, and neither the mapper nor the reducer do anything with the HTML of the documents aside from reading and writing them. This approach is very simple and likely misses many good potential candidates, but has the advantage that it requires no information other than a set of language codes, and runs in time roughly linear in the size of the dataset. Structural Filtering A major component of the STRAND system is the alignment of HTML documents. This alignment is used to determine which document pairs are actually parallel, and if they are, to align pairs of text blocks within the documents. The first step of structural filtering is to linearize the HTML. This means converting its DOM tree into a sequence of start tags, end tags, and chunks of text. Some tags (those usually found within text, such as “font” and “a”) are ignored during this step. Next, the tag/chunk sequences are aligned using dynamic programming. The objective of the alignment is to maximize the number of matching items. Given this alignment, Resnik and Smith (2003) define a small set of features which indicate the alignment quality. They annotated a set of document pairs as parallel or non-parallel, and trained a classifier on this data. We also annotated 101 Spanish-English document pairs in this way and trained a maximum entropy classifier. However, even when using the best performing subset of features, the classifier only performed as well as a naive classifier which labeled every document pair as parallel, in both accuracy and F1. For this reason, we excluded the classifier from our pipeline. The strong performance of the naive baseline was likely due to the unbalanced nature of the annotated data— 80% of the document pairs that we annotated were parallel. Segmentation The text chunks from the previous step may contain several sentences, so before the sentence alignment step we must perform sentence segmentation. We use the Punkt sentence splitter from NLTK (Loper and Bird, 2002) to perform both sentence and word segmentation on each text chunk. Sentence Alignment For each aligned text chunk pair, we perform sentence alignment using the algorithm of Gale and Church (1993). Sentence Filtering Since we do not perform any boilerplate removal in earlier steps, there are many sentence pairs produced by the pipeline which contain menu items or other bits of text which are not useful to an SMT system. We avoid performing any complex boilerplate removal and only remove segment pairs where either the source and target text are identical, or where the source or target segments appear more than once in the extracted corpus. 
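The candidate-selection step described above reduces to a key transformation over URLs followed by grouping on the normalized key. The sketch below is a single-machine illustration of that logic rather than the actual EMR job: the real language table includes two- and three-letter ISO-639 codes and language names, and the tiny table, example URLs, and helper names here are our own.

```python
import re
from collections import defaultdict

# Tiny stand-in for the full table of ISO-639 codes and language names.
LANGUAGES = {"fr": "French", "de": "German", "es": "Spanish"}

def url_key(url):
    """Replace a language marker (surrounded by non-alphanumerics) with '*'."""
    for code, name in LANGUAGES.items():
        pattern = r"(?<![A-Za-z0-9])" + code + r"(?![A-Za-z0-9])"
        if re.search(pattern, url):
            return re.sub(pattern, "*", url, count=1), name
    return None

# "Map": group pages by their language-independent key.
pages = ["www.website.com/fr/", "www.website.com/de/", "www.other.org/news/"]
groups = defaultdict(list)
for url in pages:
    match = url_key(url)
    if match:
        key, language = match
        groups[key].append((language, url))

# "Reduce": a key associated with two or more languages yields candidate pairs.
for key, entries in groups.items():
    if len({lang for lang, _ in entries}) > 1:
        print(key, entries)
```

Grouping on the normalized key is what lets the reducer pair pages across languages without inspecting their HTML at this stage.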
3 Analysis of the Common Crawl Data We ran our algorithm on the 2009-2010 version of the crawl, consisting of 32.3 terabytes of data. Since the full dataset is hosted on EC2, the only cost to us is CPU time charged by Amazon, which came to a total of about $400, and data storage/transfer costs for our output, which came to roughly $100. For practical reasons we split the run into seven subsets, on which the full algorithm was run independently. This is different from running a single Map-Reduce job over the entire dataset, since websites in different subsets of the data cannot be matched. However, since the data is stored as it is crawled, it is likely that matching websites will be found in the same split of the data. Table 1 shows the amount of raw parallel data obtained for a large selection of language pairs. As far as we know, ours is the first system built to mine parallel text from the Common Crawl. Since the resource is new, we wanted to understand the quantity, quality, and type of data that we are likely to obtain from it. To this end, we conducted a number of experiments to measure these features. Since our mining heuristics are very simple, these results can be construed as a lower bound on what is actually possible. 3.1 Recall Estimates Our first question is about recall: of all the possible parallel text that is actually available on the Web, how much does our algorithm actually find in the Common Crawl? Although this question is difficult to answer precisely, we can estimate an answer by comparing our mined URLs against a large collection of previously mined URLs that were found using targeted techniques: those in the French-English Gigaword corpus (Callison-Burch et al., 2011). We found that 45% of the URL pairs would 1376 French German Spanish Russian Japanese Chinese Segments 10.2M 7.50M 5.67M 3.58M 1.70M 1.42M Source Tokens 128M 79.9M 71.5M 34.7M 9.91M 8.14M Target Tokens 118M 87.5M 67.6M 36.7M 19.1M 14.8M Arabic Bulgarian Czech Korean Tamil Urdu Segments 1.21M 909K 848K 756K 116K 52.1K Source Tokens 13.1M 8.48M 7.42M 6.56M 1.01M 734K Target Tokens 13.5M 8.61M 8.20M 7.58M 996K 685K Bengali Farsi Telugu Somali Kannada Pashto Segments 59.9K 44.2K 50.6K 52.6K 34.5K 28.0K Source Tokens 573K 477K 336K 318K 305K 208K Target Tokens 537K 459K 358K 325K 297K 218K Table 1: The amount of parallel data mined from CommonCrawl for each language paired with English. Source tokens are counts of the foreign language tokens, and target tokens are counts of the English language tokens. have been discovered by our heuristics, though we actually only find 3.6% of these URLs in our output.4 If we had included “f” and “e” as identifiers for French and English respectively, coverage of the URL pairs would increase to 74%. However, we chose not to include single letter identifiers in our experiments due to the high number of false positives they generated in preliminary experiments. 3.2 Precision Estimates Since our algorithms rely on cues that are mostly external to the contents of the extracted data and have no knowledge of actual languages, we wanted to evaluate the precision of our algorithm: how much of the mined data actually consists of parallel sentences? To measure this, we conducted a manual analysis of 200 randomly selected sentence pairs for each of three language pairs. The texts are heterogeneous, covering several topical domains like tourism, advertising, technical specifications, finances, e-commerce and medicine. 
For GermanEnglish, 78% of the extracted data represent perfect translations, 4% are paraphrases of each other (convey a similar meaning, but cannot be used for SMT training) and 18% represent misalignments. Furthermore, 22% of the true positives are potentially machine translations (judging by the quality), whereas in 13% of the cases one of the sentences contains additional content not ex4The difference is likely due to the coverage of the CommonCrawl corpus. pressed in the other. As for the false positives, 13.5% of them have either the source or target sentence in the wrong language, and the remaining ones representing failures in the alignment process. Across three languages, our inspection revealed that around 80% of randomly sampled data appeared to contain good translations (Table 2). Although this analysis suggests that language identification and SMT output detection (Venugopal et al., 2011) may be useful additions to the pipeline, we regard this as reasonably high precision for our simple algorithm. Language Precision Spanish 82% French 81% German 78% Table 2: Manual evaluation of precision (by sentence pair) on the extracted parallel data for Spanish, French, and German (paired with English). In addition to the manual evaluation of precision, we applied language identification to our extracted parallel data for several additional languages. We used the “langid.py” tool (Lui and Baldwin, 2012) at the segment level, and report the percentage of sentence pairs where both sentences were recognized as the correct language. Table 3 shows our results. Comparing against our manual evaluation from Table 2, it appears that many sentence pairs are being incorrectly judged as nonparallel. This is likely because language identification tends to perform poorly on short segments. 1377 French German Spanish Arabic 63% 61% 58% 51% Chinese Japanese Korean Czech 50% 48% 48% 47% Russian Urdu Bengali Tamil 44% 31% 14% 12% Kannada Telugu Kurdish 12% 6.3% 2.9% Table 3: Automatic evaluation of precision through language identification for several languages paired with English. 3.3 Domain Name and Topic Analysis Although the above measures tell us something about how well our algorithms perform in aggregate for specific language pairs, we also wondered about the actual contents of the data. A major difficulty in applying SMT even on languages for which we have significant quantities of parallel text is that most of that parallel text is in the news and government domains. When applied to other genres, such systems are notoriously brittle. What kind of genres are represented in the Common Crawl data? We first looked at the domain names which contributed the most data. Table 4 gives the top five domains by the number of tokens. The top two domain names are related to travel, and they account for about 10% of the total data. We also applied Latent Dirichlet Allocation (LDA; Blei et al., 2003) to learn a distribution over latent topics in the extracted data, as this is a popular exploratory data analysis method. In LDA a topic is a unigram distribution over words, and each document is modeled as a distribution over topics. To create a set of documents from the extracted CommonCrawl data, we took the English side of the extracted parallel segments for each URL in the Spanish-English portion of the data. This gave us a total of 444, 022 documents. In our first experiment, we used the MALLET toolkit (McCallum, 2002) to generate 20 topics, which are shown in Table 5. 
Some of the topics that LDA finds correspond closely with specific domains, such as topics 1 (blingee.com) and 2 (opensubtitles.org). Several of the topics correspond to the travel domain. Foreign stop words appear in a few of the topics. Since our system does not include any language identification, this is not surprising.5 However it does suggest an avenue for possible improvement. In our second LDA experiment, we compared our extracted CommonCrawl data with Europarl. We created a set of documents from both CommonCrawl and Europarl, and again used MALLET to generate 100 topics for this data.6 We then labeled each document by its most likely topic (as determined by that topic’s mixture weights), and counted the number of documents from Europarl and CommonCrawl for which each topic was most prominent. While this is very rough, it gives some idea of where each topic is coming from. Table 6 shows a sample of these topics. In addition to exploring topics in the datasets, we also performed additional intrinsic evaluation at the domain level, choosing top domains for three language pairs. We specifically classified sentence pairs as useful or boilerplate (Table 7). Among our observations, we find that commercial websites tend to contain less boilerplate material than encyclopedic websites, and that the ratios tend to be similar across languages in the same domain. FR ES DE www.booking.com 52% 71% 52% www.hotel.info 34% 44% memory-alpha.org 34% 25% 55% Table 7: Percentage of useful (non-boilerplate) sentences found by domain and language pair. hotel.info was not found in our GermanEnglish data. 4 Machine Translation Experiments For our SMT experiments, we use the Moses toolkit (Koehn et al., 2007). In these experiments, a baseline system is trained on an existing parallel corpus, and the experimental system is trained on the baseline corpus plus the mined parallel data. In all experiments we include the target side of the mined parallel data in the language model, in order to distinguish whether results are due to influences from parallel or monolingual data. 5We used MALLET’s stop word removal, but that is only for English. 6Documents were created from Europarl by taking “SPEAKER” tags as document boundaries, giving us 208,431 documents total. 1378 Genre Domain Pages Segments Source Tokens Target Tokens Total 444K 5.67M 71.5M 67.5M travel www.booking.com 13.4K 424K 5.23M 5.14M travel www.hotel.info 9.05K 156K 1.93M 2.13M government www.fao.org 2.47K 60.4K 1.07M 896K religious scriptures.lds.org 7.04K 47.2K 889K 960K political www.amnesty.org 4.83K 38.1K 641K 548K Table 4: The top five domains from the Spanish-English portion of the data. The domains are ranked by the combined number of source and target tokens. 
Index Most Likely Tokens 1 glitter graphics profile comments share love size girl friends happy blingee cute anime twilight sexy emo 2 subtitles online web users files rar movies prg akas dwls xvid dvdrip avi results download eng cd movie 3 miles hotels city search hotel home page list overview select tokyo discount destinations china japan 4 english language students details skype american university school languages words england british college 5 translation japanese english chinese dictionary french german spanish korean russian italian dutch 6 products services ni system power high software design technology control national applications industry 7 en de el instructions amd hyper riv saab kfreebsd poland user fr pln org wikimedia pl commons fran norway 8 information service travel services contact number time account card site credit company business terms 9 people time life day good years work make god give lot long world book today great year end things 10 show km map hotels de hotel beach spain san italy resort del mexico rome portugal home santa berlin la 11 rotary international world club korea foundation district business year global hong kong president ri 12 hotel reviews stay guest rooms service facilities room smoking submitted customers desk score united hour 13 free site blog views video download page google web nero internet http search news links category tv 14 casino game games play domaine ago days music online poker free video film sports golf live world tags bet 15 water food attribution health mango japan massage medical body baby natural yen commons traditional 16 file system windows server linux installation user files set debian version support program install type 17 united kingdom states america house london street park road city inn paris york st france home canada 18 km show map hotels hotel featured search station museum amsterdam airport centre home city rue germany 19 hotel room location staff good breakfast rooms friendly nice clean great excellent comfortable helpful 20 de la en le el hotel es het del und die il est der les des das du para Table 5: A list of 20 topics generated using the MALLET toolkit (McCallum, 2002) and their most likely tokens. 4.1 News Domain Translation Our first set of experiments are based on systems built for the 2012 Workshop on Statistical Machine Translation (WMT) (Callison-Burch et al., 2012) using all available parallel and monolingual data for that task, aside from the French-English Gigaword. In these experiments, we use 5-gram language models when the target language is English or German, and 4-gram language models for French and Spanish. We tune model weights using minimum error rate training (MERT; Och, 2003) on the WMT 2008 test data. The results are given in Table 8. For all language pairs and both test sets (WMT 2011 and WMT 2012), we show an improvement of around 0.5 BLEU. We also included the French-English Gigaword in separate experiments given in Table 9, and Table 10 compares the sizes of the datasets used. These results show that even on top of a different, larger parallel corpus mined from the web, adding CommonCrawl data still yields an improvement. 4.2 Open Domain Translation A substantial appeal of web-mined parallel data is that it might be suitable to translation of domains other than news, and our topic modeling analysis (§3.3) suggested that this might indeed be the case. We therefore performed an additional set of experiments for Spanish-English, but we include test sets from outside the news domain. 
1379 Europarl CommonCrawl Most Likely Tokens 9 2975 hair body skin products water massage treatment natural oil weight acid plant 2 4383 river mountain tour park tours de day chile valley ski argentina national peru la 8 10377 ford mercury dealer lincoln amsterdam site call responsible affiliates displayed 7048 675 market services european competition small public companies sector internal 9159 1359 time president people fact make case problem clear good put made years situation 13053 849 commission council european parliament member president states mr agreement 1660 5611 international rights human amnesty government death police court number torture 1617 4577 education training people cultural school students culture young information Table 6: A sample of topics along with the number of Europarl and CommonCrawl documents where they are the most likely topic in the mixture. We include topics that are mostly found in Europarl or CommonCrawl, and some that are somewhat prominent in both. WMT 11 FR-EN EN-FR ES-EN EN-ES EN-DE Baseline 30.46 29.96 30.79 32.41 16.12 +Web Data 30.92 30.51 31.05 32.89 16.74 WMT 12 FR-EN EN-FR ES-EN EN-ES EN-DE Baseline 29.25 27.92 32.80 32.83 16.61 +Web Data 29.82 28.22 33.39 33.41 17.30 Table 8: BLEU scores for several language pairs before and after adding the mined parallel data to systems trained on data from WMT data. WMT 11 FR-EN EN-FR Baseline 30.96 30.69 +Web Data 31.24 31.17 WMT 12 FR-EN EN-FR Baseline 29.88 28.50 +Web Data 30.08 28.76 Table 9: BLEU scores for French-English and English-French before and after adding the mined parallel data to systems trained on data from WMT data including the French-English Gigaword (Callison-Burch et al., 2011). For these experiments, we also include training data mined from Wikipedia using a simplified version of the sentence aligner described by Smith et al. (2010), in order to determine how the effect of such data compares with the effect of webmined data. The baseline system was trained using only the Europarl corpus (Koehn, 2005) as parallel data, and all experiments use the same language model trained on the target sides of Europarl, the English side of all linked SpanishEnglish Wikipedia articles, and the English side of the mined CommonCrawl data. We use a 5gram language model and tune using MERT (Och, Corpus EN-FR EN-ES EN-DE News Commentary 2.99M 3.43M 3.39M Europarl 50.3M 49.2M 47.9M United Nations 316M 281M FR-EN Gigaword 668M CommonCrawl 121M 68.8M 88.4M Table 10: The size (in English tokens) of the training corpora used in the SMT experiments from Tables 8 and 9 for each language pair. 2003) on the WMT 2009 test set. Unfortunately, it is difficult to obtain meaningful results on some open domain test sets such as the Wikipedia dataset used by Smith et al. (2010). Wikipedia copied across the public internet, and we did not have a simple way to filter such data from our mined datasets. We therefore considered two tests that were less likely to be problematic. The Tatoeba corpus (Tiedemann, 2009) is a collection of example sentences translated into many languages by volunteers. The front page of tatoeba.org was discovered by our URL matching heuristics, but we excluded any sentence pairs that were found in the CommonCrawl data from this test set. 
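The exclusion step just mentioned, dropping Tatoeba pairs that also occur in the mined CommonCrawl data, amounts to a set difference over normalized sentence pairs. The sketch below shows one way such a filter could look; the normalization scheme is our guess rather than a detail reported in the paper.

```python
def normalize(text):
    """Lowercase and collapse whitespace before comparing sentence pairs."""
    return " ".join(text.lower().split())

def decontaminate(test_pairs, training_pairs):
    """Drop test pairs whose (source, target) also appear in the training data."""
    seen = {(normalize(s), normalize(t)) for s, t in training_pairs}
    return [(s, t) for s, t in test_pairs if (normalize(s), normalize(t)) not in seen]

mined = [("Hola mundo", "Hello world")]
test = [("Hola mundo", "Hello world"), ("Buenos días", "Good morning")]
print(decontaminate(test, mined))  # [('Buenos días', 'Good morning')]
```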
1380 The second dataset is a set of crowdsourced translation of Spanish speech transcriptions from the Spanish Fisher corpus.7 As part of a research effort on cross-lingual speech applications, we obtained English translations of the data using Amazon Mechanical Turk, following a protocol similar to one described by Zaidan and CallisonBurch (2011): we provided clear instructions, employed several quality control measures, and obtained redundant translations of the complete dataset (Lopez et al., 2013). The advantage of this data for our open domain translation test is twofold. First, the Fisher dataset consists of conversations in various Spanish dialects on a wide variety of prompted topics. Second, because we obtained the translations ourselves, we could be absolutely assured that they did not appear in some form anywhere on the Web, making it an ideal blind test. WMT10 Tatoeba Fisher Europarl 89/72/46/20 94/75/45/18 87/69/39/13 +Wiki 92/78/52/24 96/80/50/21 91/75/44/15 +Web 96/82/56/27 99/88/58/26 96/83/51/19 +Both 96/84/58/29 99/89/60/27 96/83/52/20 Table 11: n-gram coverage percentages (up to 4grams) of the source side of our test sets given our different parallel training corpora computed at the type level. WMT10 Tatoeba Fisher Europarl 27.21 36.13 46.32 +Wiki 28.03 37.82 49.34 +Web 28.50 41.07 51.13 +Both 28.74 41.12 52.23 Table 12: BLEU scores for Spanish-English before and after adding the mined parallel data to a baseline Europarl system. We used 1000 sentences from each of the Tatoeba and Fisher datasets as test. For comparison, we also test on the WMT 2010 test set (Callison-Burch et al., 2010). Following Munteanu and Marcu (2005), we show the n-gram coverage of each corpus (percentage of n-grams from the test corpus which are also found in the training corpora) in Table 11. Table 12 gives end-to-end results, which show a strong improvement on the WMT test set (1.5 BLEU), and larger 7Linguistic Data Consortium LDC2010T04. improvements on Tatoeba and Fisher (almost 5 BLEU). 5 Discussion Web-mined parallel texts have been an exclusive resource of large companies for several years. However, when web-mined parallel text is available to everyone at little or no cost, there will be much greater potential for groundbreaking research to come from all corners. With the advent of public services such as Amazon Web Services and the Common Crawl, this may soon be a reality. As we have shown, it is possible to obtain parallel text for many language pairs in a variety of domains very cheaply and quickly, and in sufficient quantity and quality to improve statistical machine translation systems. However, our effort has merely scratched the surface of what is possible with this resource. We will make our code and data available so that others can build on these results. Because our system is so simple, we believe that our results represent lower bounds on the gains that should be expected in performance of systems previously trained only on curated datasets. There are many possible means through which the system could be improved, including more sophisticated techniques for identifying matching URLs, better alignment, better language identification, better filtering of data, and better exploitation of resulting cross-domain datasets. Many of the components of our pipeline were basic, leaving considerable room for improvement. 
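The type-level n-gram coverage reported in Table 11 is the percentage of source-side test n-gram types, up to 4-grams, that also occur in a given training corpus. The sketch below shows such a computation on toy data; whitespace tokenization and the example sentences are our simplifications.

```python
def ngram_types(sentences, n):
    """Set of n-gram types over whitespace-tokenized sentences."""
    grams = set()
    for sentence in sentences:
        tokens = sentence.split()
        grams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return grams

def coverage(train_sentences, test_sentences, max_n=4):
    """Percentage of test n-gram types (1..max_n) that also appear in training."""
    percentages = []
    for n in range(1, max_n + 1):
        train, test = ngram_types(train_sentences, n), ngram_types(test_sentences, n)
        percentages.append(100.0 * len(test & train) / len(test) if test else 0.0)
    return percentages

train = ["el hotel está cerca de la playa", "reservar una habitación doble"]
test = ["el hotel está cerca", "una habitación con vistas"]
print(coverage(train, test))  # roughly [75.0, 66.7, 50.0, 50.0] on this toy data
```

The larger coverage gains that the web-mined data contributes on the Tatoeba and Fisher sets line up with the larger BLEU improvements on those sets in Table 12.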
For example, the URL matching strategy could easily be improved for a given language pair by spending a little time crafting regular expressions tailored to some major websites. Callison-Burch et al. (2011) gathered almost 1 trillion tokens of French-English parallel data this way. Another strategy for mining parallel webpage pairs is to scan the HTML for links to the same page in another language (Nie et al., 1999). Other, more sophisticated techniques may also be possible. Uszkoreit et al. (2010), for example, translated all non-English webpages into English using an existing translation system and used near-duplicate detection methods to find candidate parallel document pairs. Ture and Lin (2012) had a similar approach for finding parallel Wikipedia documents by using near-duplicate detection, though they did not need to apply a full translation system to all non-English documents. 1381 Instead, they represented documents in bag-ofwords vector space, and projected non-English document vectors into the English vector space using the translation probabilities of a word alignment model. By comparison, one appeal of our simple approach is that it requires only a table of language codes. However, with this system in place, we could obtain enough parallel data to bootstrap these more sophisticated approaches. It is also compelling to consider ways in which web-mined data obtained from scratch could be used to bootstrap other mining approaches. For example, Smith et al. (2010) mine parallel sentences from comparable documents in Wikipedia, demonstrating substantial gains on open domain translation. However, their approach required seed parallel data to learn models used in a classifier. We imagine a two-step process, first obtaining parallel data from the web, followed by comparable data from sources such as Wikipedia using models bootstrapped from the web-mined data. Such a process could be used to build translation systems for new language pairs in a very short period of time, hence fulfilling one of the original promises of SMT. Acknowledgements Thanks to Ann Irvine, Jonathan Weese, and our anonymous reviewers from NAACL and ACL for comments on previous drafts. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement 288487 (MosesCore). This research was partially funded by the Johns Hopkins University Human Language Technology Center of Excellence, and by gifts from Google and Microsoft. References David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March. Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F. Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT ’10, pages 17–53. Association for Computational Linguistics. Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar F. Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT ’11, pages 22–64. Association for Computational Linguistics. Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. 
In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 10–51, Montr´eal, Canada, June. Association for Computational Linguistics. Jiang Chen and Jian-Yun Nie. 2000. Parallel web text mining for cross-language ir. In IN IN PROC. OF RIAO, pages 62–77. J. Dean and S. Ghemawat. 2004. Mapreduce: simplified data processing on large clusters. In Proceedings of the 6th conference on Symposium on Opearting Systems Design & Implementation-Volume 6, pages 10–10. USENIX Association. William A. Gale and Kenneth W. Church. 1993. A program for aligning sentences in bilingual corpora. Comput. Linguist., 19:75–102, March. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 177–180. Association for Computational Linguistics. P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5. Edward Loper and Steven Bird. 2002. Nltk: the natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics - Volume 1, ETMTNLP ’02, pages 63–70. Association for Computational Linguistics. Adam Lopez, Matt Post, and Chris Callison-Burch. 2013. Parallel speech, transcription, and translation: The Fisher and Callhome Spanish-English speech translation corpora. Technical Report 11, Johns Hopkins University Human Language Technology Center of Excellence. Marco Lui and Timothy Baldwin. 2012. langid.py: an off-the-shelf language identification tool. In Proceedings of the ACL 2012 System Demonstrations, ACL ’12, pages 25–30. Association for Computational Linguistics. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. 1382 Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving Machine Translation Performance by Exploiting Non-Parallel Corpora. Comput. Linguist., 31:477–504, December. Jian-Yun Nie, Michel Simard, Pierre Isabelle, and Richard Durand. 1999. Cross-language information retrieval based on parallel texts and automatic mining of parallel texts from the web. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’99, pages 74–81, New York, NY, USA. ACM. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In acl, pages 160– 167, Sapporo, Japan. P. Resnik and N. A Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349–380. Philip Resnik. 1999. Mining the web for bilingual text. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, ACL ’99, pages 527–534. Association for Computational Linguistics. Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting Parallel Sentences from Comparable Corpora using Document Level Alignment. In NAACL 2010. J¨org Tiedemann. 2009. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. 
John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria. Ferhan Ture and Jimmy Lin. 2012. Why not grab a free lunch? mining large corpora for parallel sentences to improve translation modeling. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 626–630, Montr´eal, Canada, June. Association for Computational Linguistics. Jakob Uszkoreit, Jay M. Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. Large scale parallel document mining for machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 1101– 1109. Association for Computational Linguistics. Ashish Venugopal, Jakob Uszkoreit, David Talbot, Franz J. Och, and Juri Ganitkevitch. 2011. Watermarking the outputs of structured prediction with an application in statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 1363–1372. Association for Computational Linguistics. Omar F. Zaidan and Chris Callison-Burch. 2011. Crowdsourcing translation: Professional quality from non-professionals. In Proc. of ACL. 1383
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1384–1394, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Sentence Compression Based Framework to Query-Focused Multi-Document Summarization Lu Wang1 Hema Raghavan2 Vittorio Castelli2 Radu Florian2 Claire Cardie1 1Department of Computer Science, Cornell University, Ithaca, NY 14853, USA {luwang, cardie}@cs.cornell.edu 2IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA {hraghav, vittorio, raduf}@us.ibm.com Abstract We consider the problem of using sentence compression techniques to facilitate queryfocused multi-document summarization. We present a sentence-compression-based framework for the task, and design a series of learning-based compression models built on parse trees. An innovative beam search decoder is proposed to efficiently find highly probable compressions. Under this framework, we show how to integrate various indicative metrics such as linguistic motivation and query relevance into the compression process by deriving a novel formulation of a compression scoring function. Our best model achieves statistically significant improvement over the state-of-the-art systems on several metrics (e.g. 8.0% and 5.4% improvements in ROUGE-2 respectively) for the DUC 2006 and 2007 summarization task. 1 Introduction The explosion of the Internet clearly warrants the development of techniques for organizing and presenting information to users in an effective way. Query-focused multi-document summarization (MDS) methods have been proposed as one such technique and have attracted significant attention in recent years. The goal of query-focused MDS is to synthesize a brief (often fixed-length) and well-organized summary from a set of topicrelated documents that answer a complex question or address a topic statement. The resulting summaries, in turn, can support a number of information analysis applications including openended question answering, recommender systems, and summarization of search engine results. As further evidence of its importance, the Document Understanding Conference (DUC) has used queryfocused MDS as its main task since 2004 to foster new research on automatic summarization in the context of users’ needs. To date, most top-performing systems for multi-document summarization—whether queryspecific or not—remain largely extractive: their summaries are comprised exclusively of sentences selected directly from the documents to be summarized (Erkan and Radev, 2004; Haghighi and Vanderwende, 2009; Celikyilmaz and Hakkani-T¨ur, 2011). Despite their simplicity, extractive approaches have some disadvantages. First, lengthy sentences that are partly relevant are either excluded from the summary or (if selected) can block the selection of other important sentences, due to summary length constraints. In addition, when people write summaries, they tend to abstract the content and seldom use entire sentences taken verbatim from the original documents. In news articles, for example, most sentences are lengthy and contain both potentially useful information for a summary as well as unnecessary details that are better omitted. Consider the following DUC query as input for a MDS system:1 “In what ways have stolen artworks been recovered? 
How often are suspects arrested or prosecuted for the thefts?” One manually generated summary includes the following sentence but removes the bracketed words in gray: A man suspected of stealing a million-dollar collection of [hundreds of ancient] Nepalese and Tibetan art objects in New York [11 years ago] was arrested [Thursday at his South Los Angeles home, where he had been hiding the antiquities, police said]. In this example, the compressed sentence is rela1From DUC 2005, query for topic d422g. 1384 tively more succinct and readable than the original (e.g. in terms of Flesch-Kincaid Reading Ease Score (Kincaid et al., 1975)). Likewise, removing information irrelevant to the query (e.g. “11 years ago”, “police said”) is crucial for query-focused MDS. Sentence compression techniques (Knight and Marcu, 2000; Clarke and Lapata, 2008) are the standard for producing a compact and grammatical version of a sentence while preserving relevance, and prior research (e.g. Lin (2003)) has demonstrated their potential usefulness for generic document summarization. Similarly, strides have been made to incorporate sentence compression into query-focused MDS systems (Zajic et al., 2006). Most attempts, however, fail to produce better results than those of the best systems built on pure extraction-based approaches that use no sentence compression. In this paper we investigate the role of sentence compression techniques for query-focused MDS. We extend existing work in the area first by investigating the role of learning-based sentence compression techniques. In addition, we design three types of approaches to sentence-compression— rule-based, sequence-based and tree-based—and examine them within our compression-based framework for query-specific MDS. Our topperforming sentence compression algorithm incorporates measures of query relevance, content importance, redundancy and language quality, among others. Our tree-based methods rely on a scoring function that allows for easy and flexible tailoring of sentence compression to the summarization task, ultimately resulting in significant improvements for MDS, while at the same time remaining competitive with existing methods in terms of sentence compression, as discussed next. We evaluate the summarization models on the standard Document Understanding Conference (DUC) 2006 and 2007 corpora 2 for queryfocused MDS and find that all of our compressionbased summarization models achieve statistically significantly better performance than the best DUC 2006 systems. Our best-performing system yields an 11.02 ROUGE-2 score (Lin and Hovy, 2003), a 8.0% improvement over the best reported score (10.2 (Davis et al., 2012)) on the 2We believe that we can easily adapt our system for tasks (e.g. TAC-08’s opinion summarization or TAC-09’s update summarization) or domains (e.g. web pages or wikipedia pages). We reserve that for future work. DUC 2006 dataset, and an 13.49 ROUGE-2, a 5.4% improvement over the best score in DUC 2007 (12.8 (Davis et al., 2012)). We also observe substantial improvements over previous systems w.r.t. the manual Pyramid (Nenkova and Passonneau, 2004) evaluation measure (26.4 vs. 22.9 (Jagarlamudi et al., 2006)); human annotators furthermore rate our system-generated summaries as having less redundancy and comparable quality w.r.t. other linguistic quality metrics. 
With these results we believe we are the first to successfully show that sentence compression can provide statistically significant improvements over pure extraction-based approaches for queryfocused MDS. 2 Related Work Existing research on query-focused multidocument summarization (MDS) largely relies on extractive approaches, where systems usually take as input a set of documents and select the top relevant sentences for inclusion in the final summary. A wide range of methods have been employed for this task. For unsupervised methods, sentence importance can be estimated by calculating topic signature words (Lin and Hovy, 2000; Conroy et al., 2006), combining query similarity and document centrality within a graph-based model (Otterbacher et al., 2005), or using a Bayesian model with sophisticated inference (Daum´e and Marcu, 2006). Davis et al. (2012) first learn the term weights by Latent Semantic Analysis, and then greedily select sentences that cover the maximum combined weights. Supervised approaches have mainly focused on applying discriminative learning for ranking sentences (Fuentes et al., 2007). Lin and Bilmes (2011) use a class of carefully designed submodular functions to reward the diversity of the summaries and select sentences greedily. Our work is more related to the less studied area of sentence compression as applied to (single) document summarization. Zajic et al. (2006) tackle the query-focused MDS problem using a compress-first strategy: they develop heuristics to generate multiple alternative compressions of all sentences in the original document; these then become the candidates for extraction. This approach, however, does not outperform some extractionbased approaches. A similar idea has been studied for MDS (Lin, 2003; Gillick and Favre, 2009), 1385 but limited improvement is observed over extractive baselines with simple compression rules. Finally, although learning-based compression methods are promising (Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011), it is unclear how well they handle issues of redundancy. Our research is also inspired by probabilistic sentence-compression approaches, such as the noisy-channel model (Knight and Marcu, 2000; Turner and Charniak, 2005), and its extension via synchronous context-free grammars (SCFG) (Aho and Ullman, 1969; Lewis and Stearns, 1968) for robust probability estimation (Galley and McKeown, 2007). Rather than attempt to derive a new parse tree like Knight and Marcu (2000) and Galley and McKeown (2007), we learn to safely remove a set of constituents in our parse tree-based compression model while preserving grammatical structure and essential content. Sentence-level compression has also been examined via a discriminative model McDonald (2006), and Clarke and Lapata (2008) also incorporate discourse information by using integer linear programming. 3 The Framework We now present our query-focused MDS framework consisting of three steps: Sentence Ranking, Sentence Compression and Post-processing. First, sentence ranking determines the importance of each sentence given the query. Then, a sentence compressor iteratively generates the most likely succinct versions of the ranked sentences, which are cumulatively added to the summary, until a length limit is reached. Finally, the postprocessing stage applies coreference resolution and sentence reordering to build the summary. Sentence Ranking. This stage aims to rank sentences in order of relevance to the query. 
Unsurprisingly, ranking algorithms have been successfully applied to this task. We experimented with two of them – Support Vector Regression (SVR) (Mozer et al., 1997) and LambdaMART (Burges et al., 2007). The former has been used previously for MDS (Ouyang et al., 2011). LambdaMart on the other hand has shown considerable success in information retrieval tasks (Burges, 2010); we are the first to apply it to summarization. For training, we use 40 topics (i.e. queries) from the DUC 2005 corpus (Dang, 2005) along with their manually generated abstracts. As in previous work (Shen and Li, Basic Features relative/absolute position is among the first 1/3/5 sentences? number of words (with/without stopwords) number of words more than 5/10 (with/without stopwords) Query-Relevant Features unigram/bigram/skip bigram (at most four words apart) overlap unigram/bigram TF/TF-IDF similarity mention overlap subject/object/indirect object overlap semantic role overlap relation overlap Query-Independent Features average/total unigram/bigram IDF/TF-IDF unigram/bigram TF/TF-IDF similarity with the centroid of the cluster average/sum of sumBasic/SumFocus (Toutanova et al., 2007) average/sum of mutual information average/sum of number of topic signature words (Lin and Hovy, 2000) basic/improved sentence scorers from Conroy et al. (2006) Content Features contains verb/web link/phone number? contains/portion of words between parentheses Table 1: Sentence-level features for sentence ranking. 2011; Ouyang et al., 2011), we use the ROUGE2 score, which measures bigram overlap between a sentence and the abstracts, as the objective for regression. While space limitations preclude a longer discussion of the full feature set (ref. Table 1), we describe next the query-relevant features used for sentence ranking as these are the most important for our summarization setting. The goal of this feature subset is to determine the similarity between the query and each candidate sentence. When computing similarity, we remove stopwords as well as the words “discuss, describe, specify, explain, identify, include, involve, note” that are adopted and extended from Conroy et al. (2006). Then we conduct simple query expansion based on the title of the topic and cross-document coreference resolution. Specifically, we first add the words from the topic title to the query. And for each mention in the query, we add other mentions within the set of documents that corefer with this mention. Finally, we compute two versions of the features—one based on the original query and another on the expanded one. We also derive the semantic role overlap and relation instance overlap between the query and each sentence. Crossdocument coreference resolution, semantic role labeling and relation extraction are accomplished via the methods described in Section 5. Sentence Compression. As the main focus of this paper, we propose three types of compression methods, described in detail in Section 4 below. Post-processing. Post-processing performs coreference resolution and sentence ordering. 1386 Basic Features Syntactic Tree Features first 1/3/5 tokens (toks)? POS tag last 1/3/5 toks? parent/grandparent label first letter/all letters capitalized? leftmost child of parent? is negation? second leftmost child of parent? is stopword? is headword? Dependency Tree Features in NP/VP/ADVP/ADJP chunk? dependency relation (dep rel) Semantic Features parent/grandparent dep rel is a predicate? is the root? semantic role label has a depth larger than 3/5? 
Rule-Based Features For each rule in Table 2 , we construct a corresponding feature to indicate whether the token is identified by the rule. Table 3: Token-level features for sequence-based compression. We replace each pronoun with its referent unless they appear in the same sentence. For sentence ordering, each compressed sentence is assigned to the most similar (tf-idf) query sentence. Then a Chronological Ordering algorithm (Barzilay et al., 2002) sorts the sentences for each query based first on the time stamp, and then the position in the source document. 4 Sentence Compression Sentence compression is typically formulated as the problem of removing secondary information from a sentence while maintaining its grammaticality and semantic structure (Knight and Marcu, 2000; McDonald, 2006; Galley and McKeown, 2007; Clarke and Lapata, 2008). We leave other rewrite operations, such as paraphrasing and reordering, for future work. Below we describe the sentence compression approaches developed in this research: RULE-BASED COMPRESSION, SEQUENCE-BASED COMPRESSION, and TREEBASED COMPRESSION. 4.1 Rule-based Compression Turner and Charniak (2005) have shown that applying hand-crafted rules for trimming sentences can improve both content and linguistic quality. Our rule-based approach extends existing work (Conroy et al., 2006; Toutanova et al., 2007) to create the linguistically-motivated compression rules of Table 2. To avoid ill-formed output, we disallow compressions of more than 10 words by each rule. 4.2 Sequence-based Compression As in McDonald (2006) and Clarke and Lapata (2008), our sequence-based compression model makes a binary “keep-or-delete” decision for each word in the sentence. In contrast, however, we Figure 1: Diagram of tree-based compression. The nodes to be dropped are grayed out. In this example, the root of the gray subtree (a “PP”) would be labeled REMOVE. Its siblings and parent are labeled RETAIN and PARTIAL, respectively. The trimmed tree is realized as “Malaria causes millions of deaths.” view compression as a sequential tagging problem and make use of linear-chain Conditional Random Fields (CRFs) (Lafferty et al., 2001) to select the most likely compression. We represent each sentence as a sequence of tokens, X = x0x1 . . . xn, and generate a sequence of labels, Y = y0y1 . . . yn, that encode which tokens are kept, using a BIO label format: {B-RETAIN denotes the beginning of a retained sequence, IRETAIN indicates tokens “inside” the retained sequence, O marks tokens to be removed}. The CRF model is built using the features shown in Table 3. “Dependency Tree Features” encode the grammatical relations in which each word is involved as a dependent. For the “Syntactic Tree”, “Dependency Tree” and “Rule-Based” features, we also include features for the two words that precede and the two that follow the current word. Detailed descriptions of the training data and experimental setup are in Section 5. During inference, we find the maximally likely sequence Y according to a CRF with parameter θ (Y = arg maxY ′ P(Y ′|X; θ)), while simultaneously enforcing the rules of Table 2 to reduce the hypothesis space and encourage grammatical compression. To do this, we encode these rules as features for each token, and whenever these feature functions fire, we restrict the possible label for that token to “O”. 
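A minimal sketch of this constrained decoding step is shown below. It assumes per-token emission scores and label-transition scores have already been produced by a trained linear-chain CRF; the toy scores, the `constrained_viterbi` helper, and the example sentence (built from the Figure 1 realization plus an attribution clause) are illustrative stand-ins, not the paper's implementation.

```python
# Minimal sketch of constrained decoding for the sequence-based model:
# tokens flagged by a Table 2 rule are forced to the "O" (remove) label.
# The emission/transition scores below are illustrative placeholders,
# not outputs of the paper's trained CRF.
LABELS = ["B-RETAIN", "I-RETAIN", "O"]

def constrained_viterbi(emission, transition, rule_flags):
    """emission[i][y]: score of label y at token i; transition[p][y]:
    score of moving from label p to y; rule_flags[i]: True if a
    deletion rule fired on token i."""
    n = len(emission)
    best, back = [{} for _ in range(n)], [{} for _ in range(n)]
    for i in range(n):
        allowed = ["O"] if rule_flags[i] else LABELS
        for y in allowed:
            if i == 0:
                best[i][y], back[i][y] = emission[i][y], None
            else:
                prev, sc = max(((p, s + transition[p][y])
                                for p, s in best[i - 1].items()),
                               key=lambda t: t[1])
                best[i][y], back[i][y] = sc + emission[i][y], prev
    # backtrace the highest-scoring label sequence
    y = max(best[-1], key=best[-1].get)
    path = [y]
    for i in range(n - 1, 0, -1):
        y = back[i][y]
        path.append(y)
    return list(reversed(path))

tokens = ["Malaria", "causes", "millions", "of", "deaths", "experts", "said"]
rule_flags = [False] * 5 + [True, True]        # attribution clause flagged
emission = [{y: 0.0 for y in LABELS} for _ in tokens]
for i in range(5):                             # toy preference to retain the main clause
    emission[i]["B-RETAIN" if i == 0 else "I-RETAIN"] = 1.0
transition = {p: {y: 0.0 for y in LABELS} for p in LABELS}
labels = constrained_viterbi(emission, transition, rule_flags)
print(" ".join(t for t, y in zip(tokens, labels) if y != "O"))
# -> Malaria causes millions of deaths
```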
4.3 Tree-based Compression Our tree-based compression methods are in line with syntax-driven approaches (Galley and McKeown, 2007), where operations are carried out on parse tree constituents. Unlike previous work (Knight and Marcu, 2000; Galley and McKeown, 2007), we do not produce a new parse tree, 1387 Rule Example Header [MOSCOW , October 19 ( Xinhua ) –] Russian federal troops Tuesday continued... Relative dates ...Centers for Disease Control confirmed [Tuesday] that there was... Intra-sentential attribution ...fueling the La Nina weather phenomenon, [the U.N. weather agency said]. Lead adverbials [Interestingly], while the Democrats tend to talk about... Noun appositives Wayne County Prosecutor [John O’Hara] wanted to send a message... Nonrestrictive relative clause Putin, [who was born on October 7, 1952 in Leningrad], was elected in the presidential election... Adverbial clausal modifiers [Starting in 1998], California will require 2 per cent of a manufacturer... (Lead sentence) [Given the short time], car makers see electric vehicles as... Within Parentheses ...to Christian home schoolers in the early 1990s [(www.homecomputermarket.com)]. Table 2: Linguistically-motivated rules for sentence compression. The grayed-out words in brackets are removed. but focus on learning to identify the proper set of constituents to be removed. In particular, when a node is dropped from the tree, all words it subsumes will be deleted from the sentence. Formally, given a parse tree T of the sentence to be compressed and a tree traversal algorithm, T can be presented as a list of ordered constituent nodes, T = t0t1 . . . tm. Our objective is to find a set of labels, L = l0l1 . . . lm, where li ∈{RETAIN, REMOVE, PARTIAL}. RETAIN (RET) and REMOVE (REM) denote whether the node ti is retained or removed. PARTIAL (PAR) means ti is partly removed, i.e. at least one child subtree of ti is dropped. Labels are identified, in order, according to the tree traversal algorithm. Every node label needs to be compatible with the labeling history: given a node ti, and a set of labels l0 . . . li−1 predicted for nodes t0 . . . ti−1, li =RET or li =REM is compatible with the history when all children of ti are labeled as RET or REM, respectively; li =PAR is compatible when ti has at least two descendents tj and tk (j < i and k < i), one of which is RETained and the other, REMoved. As such, the root of the gray subtree in Figure 1 is labeled as REM; its left siblings as RET; its parent as PAR. As the space of possible compressions is exponential in the number of leaves in the parse tree, instead of looking for the globally optimal solution, we use beam search to find a set of highly likely compressions and employ a language model trained on a large corpus for evaluation. A Beam Search Decoder. The beam search decoder (see Algorithm 1) takes as input the sentence’s parse tree T = t0t1 . . . tm, an ordering O for traversing T (e.g. postorder) as a sequence of nodes in T, the set L of possible node labels, a scoring function S for evaluating each sentence compression hypothesis, and a beam size N. Specifically, O is a permutation on the set {0, 1, . . . , m}—each element an index onto T. Following O, T is re-ordered as tO0tO1 . . . tOm, and the decoder considers each ordered constituent tOi in turn. In iteration i, all existing sentence compression hypotheses are expanded by one node, tOi, labeling it with all compatible labels. 
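This expand-then-prune loop (the ranking and beam pruning are described next and spelled out in Algorithm 1) can be summarized in a short sketch; `compatible` and `score` below are hypothetical stand-ins for the label-compatibility check and the hypothesis scorer S, not the paper's actual code.

```python
# Minimal sketch of the expand-and-prune loop behind the beam search
# decoder (Algorithm 1). `compatible` and `score` stand in for the
# label-compatibility check and the hypothesis scorer S.
LABELS = ("RETAIN", "REMOVE", "PARTIAL")

def beam_search(nodes, compatible, score, beam_size):
    """nodes: constituents in traversal order; returns up to
    `beam_size` label sequences, best-scoring first."""
    beam = [[]]                               # start with one empty hypothesis
    for i in range(len(nodes)):
        expanded = []
        for hyp in beam:
            for label in LABELS:
                # keep only expansions consistent with the labeling history
                if compatible(nodes, hyp, i, label):
                    expanded.append(hyp + [label])
        # rank all new hypotheses with the scorer and prune to the beam size
        expanded.sort(key=lambda h: score(nodes, h), reverse=True)
        beam = expanded[:beam_size]
    return beam
```

Instantiated with postorder traversal and ScoreBasic as `score`, this corresponds to the BASIC variant; the yields of the N best hypotheses are then re-ranked with a language model, as described below.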
The new hypotheses (usually subsentences) are ranked by the scorer S and the top N are preserved to be extended in the next iteration. See Figure 2 for an example. Input : parse tree T, ordering O = O0O1 . . . Om, L ={RET, REM, PAR}, hypothesis scorer S, beam size N Output: N best compressions stack ←Φ (empty set); foreach node tOi in T = tO0 . . . tOm do if i == 0 (first node visited) then foreach label lO0 in L do newHypothesis h′ ←[lO0]; put h′ into Stack; end else newStack ←Φ (empty set); foreach hypothesis h in stack do foreach label lOi in L do if lOi is compatible then newHypothesis h′ ←h + [lOi]; put h′ into newStack; end end end stack ←newStack; end Apply S to sort hypotheses in stack in descending order; Keep the N best hypotheses in stack; end Algorithm 1: Beam search decoder. Our BASIC Tree-based Compression instantiates the beam search decoder with postorder traversal and a hypothesis scorer that takes a possible sentence compression— a sequence of nodes (e.g. tO0 . . . tOk) and their labels (e.g. lO0 . . . lOk)—and returns Pk j=1 log P(lOj|tOj) (denoted later as ScoreBasic). The probability is estimated by a Maximum Entropy classifier (Berger et al., 1388 Figure 2: Example of beam search decoding. For postorder traversal, the three nodes are visited in a bottom-up order. The associated compression hypotheses (boxed) are ranked based on the scores in parentheses. Beam scores for other nodes are omitted. Basic Features Syntactic Tree Features projection falls w/in first 1/3/5 toks?∗ constituent label projection falls w/in last 1/3/5 toks?∗ parent left/right sibling label subsumes first 1/3/5 toks?∗ grandparent left/right sibling label subsumes last 1/3/5 toks?∗ is leftmost child of parent? number of words larger than 5/10?∗ is second leftmost child of parent? is leaf node?∗ is head node of parent? is root of parsing tree?∗ label of its head node has word with first letter capitalized? has a depth greater than 3/5/10? has word with all letters capitalized? Dependency Tree Features has negation? dep rel of head node† has stopwords? dep rel of parent’s head node† Semantic Features dep rel of grandparent’s head node† the head node has predicate? contain root of dep tree?† semantic roles of head node has a depth larger than 3/5?† Rule-Based Features For each rule in Table 2 , we construct a corresponding feature to indicate whether the token is identified by the rule. Table 4: Constituent-level features for tree-based compression. ∗or † denote features that are concatenated with every Syntactic Tree feature to compose a new one. 1996) trained at the constituent level using the features in Table 4. We also apply the rules of Table 2 during the decoding process. Concretely, if the words subsumed by a node are identified by any rule, we only consider REM as the node’s label. Given the N-best compressions from the decoder, we evaluate the yield of the trimmed trees using a language model trained on the Gigaword (Graff, 2003) corpus and return the compression with the highest probability. Thus, the decoder is quite flexible — its learned scoring function allows us to incorporate features salient for sentence compression while its language model guarantees the linguistic quality of the compressed string. In the sections below we consider additional improvements. 4.3.1 Improving Beam Search CONTEXT-aware search is based on the intuition that predictions on preceding context can be leveraged to facilitate the prediction of the current node. 
For example, parent nodes with children that have all been removed (retained) should have a label of REM (RET). In light of this, we encode these contextual predictions as additional features of S, that is, ALL-CHILDREN-REMOVED/RETAINED, ANY-LEFT-SIBLING-REMOVED/RETAINED/PARTLY-REMOVED, LABEL-OF-LEFT-SIBLING/HEAD-NODE.

HEAD-driven search modifies the BASIC postorder tree traversal by visiting the head node first at each level, leaving other orders unchanged. In a nutshell, if the head node is dropped, then its modifiers need not be preserved. We adopt the same features as CONTEXT-aware search, but remove those involving left siblings. We also add one more feature: LABEL-OF-THE-HEAD-NODE-IT-MODIFIES.

4.3.2 Task-Specific Sentence Compression

The current scorer ScoreBasic is still fairly naive in that it focuses only on features of the sentence to be compressed. However, extra-sentential knowledge can also be important for query-focused MDS. For example, information regarding relevance to the query might lead the decoder to produce compressions better suited for the summary. Towards this goal, we construct a compression scoring function—the multi-scorer (MULTI)—that allows the incorporation of multiple task-specific scorers. Given a hypothesis at any stage of decoding, which yields a sequence of words W = w_0 w_1 ... w_j, we propose the following component scorers.

Query Relevance. Query information ought to guide the compressor to identify the relevant content. The query Q is expanded as described in Section 3. Let |W ∩ Q| denote the number of unique overlapping words between W and Q; then score_q = |W ∩ Q| / |W|.

Importance. A query-independent importance score is defined as the average SumBasic (Toutanova et al., 2007) value in W, i.e. score_im = \sum_{i=1}^{j} SumBasic(w_i) / |W|.

Language Model. We let score_lm be the probability of W computed by a language model.

Cross-Sentence Redundancy. To encourage diversified content, we define a redundancy score to discount replicated content: score_red = 1 − |W ∩ C| / |W|, where C is the words already selected for the summary.

The multi-scorer is defined as a linear combination of the component scorers. Let \vec{\alpha} = (\alpha_0, ..., \alpha_4) with 0 ≤ \alpha_i ≤ 1, and let \vec{score} = (score_Basic, score_q, score_im, score_lm, score_red); then

S = score_multi = \vec{\alpha} \cdot \vec{score}   (1)

The parameters \vec{\alpha} are tuned on a held-out tuning set by grid search. We linearly normalize the score of each metric, where the minimum and maximum values are estimated from the tuning data.

5 Experimental Setup

We evaluate our methods on the DUC 2005, 2006 and 2007 datasets (Dang, 2005; Dang, 2006; Dang, 2007), each of which is a collection of newswire articles. 50 complex queries (topics) are provided for DUC 2005 and 2006, and 35 are collected for the DUC 2007 main task. Relevant documents for each query are provided along with 4 to 9 human MDS abstracts. The task is to generate a summary within 250 words to address the query. We split DUC 2005 into two parts: 40 topics to train the sentence ranking models, and 10 for ranking algorithm selection and parameter tuning for the multi-scorer. DUC 2006 and DUC 2007 are reserved as held-out test sets.

Sentence Compression. The dataset from Clarke and Lapata (2008) is used to train the CRF and MaxEnt classifiers (Section 4). It includes 82 newswire articles with one manually produced compression aligned to each sentence.

Preprocessing.
Documents are processed by a full NLP pipeline, including token and sentence segmentation, parsing, semantic role labeling, and an information extraction pipeline consisting of mention detection, NP coreference, crossdocument resolution, and relation detection (Florian et al., 2004; Luo et al., 2004; Luo and Zitouni, 2005). Learning for Sentence Ranking and Compression. We use Weka (Hall et al., 2009) to train a support vector regressor and experiment with various rankers in RankLib (Dang, 2011)3. As LambdaMART has an edge over other rankers on the held-out dataset, we selected it to produce ranked sentences for further processing. For sequencebased compression using CRFs, we employ Mallet (McCallum, 2002) and integrate the Table 2 rules during inference. NLTK (Bird et al., 2009) 3Default parameters are used. If an algorithm needs a validation set, we use 10 out of 40 topics. MaxEnt classifiers are used for tree-based compression. Beam size is fixed at 2000.4 Sentence compressions are evaluated by a 5-gram language model trained on Gigaword (Graff, 2003) by SRILM (Stolcke, 2002). 6 Results The results in Table 5 use the official ROUGE software with standard options5 and report ROUGE2 (R-2) (measures bigram overlap) and ROUGESU4 (R-SU4) (measures unigram and skip-bigram separated by up to four words). We compare our sentence-compression-based methods to the best performing systems based on ROUGE in DUC 2006 and 2007 (Jagarlamudi et al., 2006; Pingali et al., 2007), system by Davis et al. (2012) that report the best R-2 score on DUC 2006 and 2007 thus far, and to the purely extractive methods of SVR and LambdaMART. Our sentence-compression-based systems (marked with †) show statistically significant improvements over pure extractive summarization for both R-2 and R-SU4 (paired t-test, p < 0.01). This means our systems can effectively remove redundancy within the summary through compression. Furthermore, our HEAD-driven beam search method with MULTI-scorer beats all systems on DUC 20066 and all systems on DUC 2007 except the best system in terms of R-2 (p < 0.01). Its R-SU4 score is also significantly (p < 0.01) better than extractive methods, rule-based and sequence-based compression methods on both DUC 2006 and 2007. Moreover, our systems with learning-based compression have considerable compression rates, indicating their capability to remove superfluous words as well as improve summary quality. Human Evaluation. The Pyramid (Nenkova and Passonneau, 2004) evaluation was developed to manually assess how many relevant facts or Summarization Content Units (SCUs) are captured by system summaries. We ask a professional annotator (who is not one of the authors, is highly experienced in annotating for various NLP tasks, and is fluent in English) to carry out a Pyramid evaluation on 10 randomly selected topics from 4We looked at various beam sizes on the heldout data, and observed that the performance peaks around this value. 5ROUGE-1.5.5.pl -n 4 -w 1.2 -m -2 4 -u -c 95 -r 1000 -f A -p 0.5 -t 0 -a -d 6The system output from Davis et al. (2012) is not available, so significance tests are not conducted on it. 1390 DUC 2006 DUC 2007 System C Rate R-2 R-SU4 C Rate R-2 R-SU4 Best DUC system – 9.56 15.53 – 12.62 17.90 Davis et al. 
(2012) – 10.2 15.2 – 12.8 17.5 SVR 100% 7.78 13.02 100% 9.53 14.69 LambdaMART 100% 9.84 14.63 100% 12.34 15.62 Rule-based 78.99% 10.62 ∗† 15.73 † 78.11% 13.18† 18.15† Sequence 76.34% 10.49 † 15.60 † 77.20% 13.25† 18.23† Tree (BASIC + ScoreBasic) 70.48% 10.49 † 15.86 † 69.27% 13.00† 18.29† Tree (CONTEXT + ScoreBasic) 65.21% 10.55 ∗† 16.10 † 63.44% 12.75 18.07† Tree (HEAD + ScoreBasic) 66.70% 10.66 ∗† 16.18 † 65.05% 12.93 18.15† Tree (HEAD + MULTI) 70.20% 11.02 ∗† 16.25 † 73.40% 13.49† 18.46† Table 5: Query-focused MDS performance comparison: C Rate or compression rate is the proportion of words preserved. R-2 (ROUGE-2) and R-SU4 (ROUGE-SU4) scores are multiplied by 100. “–” indicates that data is unavailable. BASIC, CONTEXT and HEAD represent the basic beam search decoder, context-aware and head-driven search extensions respectively. ScoreBasic and MULTI refer to the type of scorer used. Statistically significant improvements (p < 0.01) over the best system in DUC 06 and 07 are marked with ∗. † indicates statistical significance (p < 0.01) over extractive approaches (SVR or LambdaMART). HEAD + MULTI outperforms all the other extract- and compression-based systems in R-2. System Pyr Gra Non-Red Ref Foc Coh Best DUC system (ROUGE) 22.9±8.2 3.5±0.9 3.5±1.0 3.5±1.1 3.6±1.0 2.9±1.1 Best DUC system (LQ) – 4.0±0.8 4.2±0.7 3.8±0.7 3.6±0.9 3.4±0.9 Our System 26.4±10.3 3.0±0.9 4.0±1.1 3.6±1.0 3.4±0.9 2.8±1.0 Table 6: Human evaluation on our multi-scorer based system, Jagarlamudi et al. (2006) (Best DUC system (ROUGE)), and Lacatusu et al. (2006) (Best DUC system (LQ)). Our system can synthesize more relevant content according to Pyramid (×100). We also examine linguistic quality (LQ) in Grammaticality (Gra), Non-redundancy (Non-Red), Referential clarity (Ref), Focus (Foc), and Structure and Coherence (Coh) like Dang (2006), each rated from 1 (very poor) to 5 (very good). Our system has better non-redundancy than Jagarlamudi et al. (2006) and is comparable to Jagarlamudi et al. (2006) and Lacatusu et al. (2006) in other metrics except grammaticality. the DUC 2006 task with gold-standard SCU annotation in abstracts. The Pyramid score (see Table 6) is re-calculated for the system with best ROUGE scores in DUC 2006 (Jagarlamudi et al., 2006) along with our system by the same annotator to make a meaningful comparison. We further evaluate the linguistic quality (LQ) of the summaries for the same 10 topics in accordance with the measurement in Dang (2006). Four native speakers who are undergraduate students in computer science (none are authors) performed the task, We compare our system based on HEAD-driven beam search with MULTI-scorer to the best systems in DUC 2006 achieving top ROUGE scores (Jagarlamudi et al., 2006) (Best DUC system (ROUGE)) and top linguistic quality scores (Lacatusu et al., 2006) (Best DUC system (LQ))7. The average score and standard deviation for each metric is displayed in Table 6. Our system achieves a higher Pyramid score, an indication that it captures more of the salient facts. We also 7Lacatusu et al. (2006) obtain the best scores in three linguistic quality metrics (i.e. grammaticality, focus, structure and coherence), and overall responsiveness on DUC 2006. attain better non-redundancy than Jagarlamudi et al. (2006), meaning that human raters perceive less replicative content in our summaries. Scores for other metrics are comparable to Jagarlamudi et al. (2006) and Lacatusu et al. 
(2006), which either uses minimal non-learning-based compression rules or is a pure extractive system. However, our compression system sometimes generates less grammatical sentences, and those are mostly due to parsing errors. For example, parsing a clause starting with a past tense verb as an adverbial clausal modifier can lead to an ill-formed compression. Those issues can be addressed by analyzing k-best parse trees and we leave it in the future work. A sample summary from our multiscorer based system is in Figure 3. Sentence Compression Evaluation. We also evaluate sentence compression separately on (Clarke and Lapata, 2008), adopting the same partitions as (Martins and Smith, 2009), i.e. 1, 188 sentences for training and 441 for testing. Our compression models are compared with Hedge Trimmer (Dorr et al., 2003), a discriminative model proposed by McDonald (2006) and a 1391 System C Rate Uni-Prec Uni-Rec Uni-F1 Rel-F1 HedgeTrimmer 57.64% 0.72 0.65 0.64 0.50 McDonald (2006) 70.95% 0.77 0.78 0.77 0.55 Martins and Smith (2009) 71.35% 0.77 0.78 0.77 0.56 Rule-based 87.65% 0.74 0.91 0.80 0.63 Sequence 70.79% 0.77 0.80 0.76 0.58 Tree (BASIC) 69.65% 0.77 0.79 0.75 0.56 Tree (CONTEXT) 67.01% 0.79 0.78 0.76 0.57 Tree (HEAD) 68.06% 0.79 0.80 0.77 0.59 Table 7: Sentence compression comparison. The true c rate is 69.06% for the test set. Tree-based approaches all use single-scorer. Our context-aware and head-driven tree-based approaches outperform all the other systems significantly (p < 0.01) in precision (Uni-Prec) without sacrificing the recalls (i.e. there is no statistically significant difference between our models and McDonald (2006) / M & S (2009) with p > 0.05). Italicized numbers for unigram F1 (Uni-F1) are statistically indistinguishable (p > 0.05). Our head-driven tree-based approach also produces significantly better grammatical relations F1 scores (Rel-F1) than all the other systems except the rule-based method (p < 0.01). Topic D0626H: How were the bombings of the US embassies in Kenya and Tanzania conducted? What terrorist groups and individuals were responsible? How and where were the attacks planned? WASHINGTON, August 13 (Xinhua) – President Bill Clinton Thursday condemned terrorist bomb attacks at U.S. embassies in Kenya and Tanzania and vowed to find the bombers and bring them to justice. Clinton met with his top aides Wednesday in the White House to assess the situation following the twin bombings at U.S. embassies in Kenya and Tanzania, which have killed more than 250 people and injured over 5,000, most of them Kenyans and Tanzanians. Local sources said the plan to bomb U.S. embassies in Kenya and Tanzania took three months to complete and bombers destined for Kenya were dispatched through Somali and Rwanda. FBI Director Louis Freeh, Attorney General Janet Reno and other senior U.S. government officials will hold a news conference at 1 p.m. EDT (1700GMT) at FBI headquarters in Washington “to announce developments in the investigation of the bombings of the U.S. embassies in Kenya and Tanzania,” the FBI said in a statement. ... Figure 3: Part of the summary generated by the multiscorer based summarizer for topic D0626H (DUC 2006). Grayed out words are removed. Queryirrelevant phrases, such as temporal information or source of the news, have been removed. dependency-tree based compressor (Martins and Smith, 2009)8. We adopt the metrics in Martins and Smith (2009) to measure the unigram-level macro precision, recall, and F1-measure with respect to human annotated compression. 
In addition, we also compute the F1 scores of grammatical relations which are annotated by RASP (Briscoe and Carroll, 2002) according to Clarke and Lapata (2008). In Table 7, our context-aware and head-driven tree-based compression systems show statistically significantly (p < 0.01) higher precisions (Uni8Thanks to Andr´e F.T. Martins for system outputs. Prec) than all the other systems, without decreasing the recalls (Uni-Rec) significantly (p > 0.05) based on a paired t-test. Unigram F1 scores (UniF1) in italics indicate that the corresponding systems are not statistically distinguishable (p > 0.05). For grammatical relation evaluation, our head-driven tree-based system obtains statistically significantly (p < 0.01) better F1 score (Rel-F1 than all the other systems except the rule-based system). 7 Conclusion We have presented a framework for query-focused multi-document summarization based on sentence compression. We propose three types of compression approaches. Our tree-based compression method can easily incorporate measures of query relevance, content importance, redundancy and language quality into the compression process. By testing on a standard dataset using the automatic metric ROUGE, our models show substantial improvement over pure extraction-based methods and state-of-the-art systems. Our best system also yields better results for human evaluation based on Pyramid and achieves comparable linguistic quality scores. Acknowledgments This work was supported in part by National Science Foundation Grant IIS-0968450 and a gift from Boeing. We thank Ding-Jung Han, YoungSuk Lee, Xiaoqiang Luo, Sameer Maskey, Myle Ott, Salim Roukos, Yiye Ruan, Ming Tan, Todd Ward, Bowen Zhou, and the ACL reviewers for valuable suggestions and advice on various aspects of this work. 1392 References Alfred V. Aho and Jeffrey D. Ullman. 1969. Syntax directed translations and the pushdown assembler. J. Comput. Syst. Sci., 3(1):37–56. Regina Barzilay, Noemie Elhadad, and Kathleen R. McKeown. 2002. Inferring strategies for sentence ordering in multidocument news summarization. J. Artif. Int. Res., 17(1):35–55, August. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. ACL ’11, pages 481–490, Stroudsburg, PA, USA. Association for Computational Linguistics. Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Comput. Linguist., 22(1):39–71, March. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media. T. Briscoe and J. Carroll. 2002. Robust accurate statistical annotation of general text. Christopher J.C. Burges, Robert Ragno, and Quoc Viet Le. 2007. Learning to rank with nonsmooth cost functions. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 193– 200. MIT Press, Cambridge, MA. Christopher J. C. Burges. 2010. From RankNet to LambdaRank to LambdaMART: An overview. Technical report, Microsoft Research. Asli Celikyilmaz and Dilek Hakkani-T¨ur. 2011. Discovery of topically coherent sentences for extractive summarization. ACL ’11, pages 491–499, Stroudsburg, PA, USA. Association for Computational Linguistics. James Clarke and Mirella Lapata. 2008. Global inference for sentence compression an integer linear programming approach. J. Artif. Int. Res., 31(1):399–429, March. John M. Conroy, Judith D. Schlesinger, Dianne P. O’Leary, and Jade Goldstein, 2006. 
Back to Basics: CLASSY 2006. U.S. National Inst. of Standards and Technology. Hoa T. Dang. 2005. Overview of DUC 2005. In Document Understanding Conference. Hoa Tran Dang. 2006. Overview of DUC 2006. In Proc. Document Understanding Workshop, page 10 pages. NIST. Hoa T. Dang. 2007. Overview of DUC 2007. In Document Understanding Conference. Van Dang. 2011. RankLib. Online. Hal Daum´e, III and Daniel Marcu. 2006. Bayesian query-focused summarization. ACL ’06, pages 305–312, Stroudsburg, PA, USA. Association for Computational Linguistics. Sashka T. Davis, John M. Conroy, and Judith D. Schlesinger. 2012. Occams - an optimal combinatorial covering algorithm for multi-document summarization. In ICDM Workshops, pages 454–463. Bonnie J Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: a parse-and-trim approach to headline generation. In Proceedings of the HLT-NAACL 03 on Text summarization workshop - Volume 5, HLT-NAACLDUC ’03, pages 1 – 8, Stroudsburg, PA, USA. Association for Computational Linguistics, Association for Computational Linguistics. G¨unes Erkan and Dragomir R. Radev. 2004. Lexrank: graphbased lexical centrality as salience in text summarization. J. Artif. Int. Res., 22(1):457–479, December. Radu Florian, Hany Hassan, Abraham Ittycheriah, Hongyan Jing, Nanda Kambhatla, Xiaoqiang Luo, Nicolas Nicolov, and Salim Roukos. 2004. A statistical model for multilingual entity detection and tracking. In HLT-NAACL, pages 1–8. Maria Fuentes, Enrique Alfonseca, and Horacio Rodr´ıguez. 2007. Support vector machines for query-focused summarization trained and evaluated on pyramid data. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 57–60, Stroudsburg, PA, USA. Association for Computational Linguistics. Michel Galley and Kathleen McKeown. 2007. Lexicalized Markov grammars for sentence compression. NAACL ’07, pages 180–187, Rochester, New York, April. Association for Computational Linguistics. Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, ILP ’09, pages 10–18, Stroudsburg, PA, USA. Association for Computational Linguistics. David Graff. 2003. English Gigaword. Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. NAACL ’09, pages 362–370, Stroudsburg, PA, USA. Association for Computational Linguistics. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: an update. SIGKDD Explor. Newsl., 11(1):10–18, November. Jagadeesh Jagarlamudi, Prasad Pingali, and Vasudeva Varma, 2006. Query Independent Sentence Scoring approach to DUC 2006. J. Peter Kincaid, Robert P. Fishburne, Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Technical report, February. Kevin Knight and Daniel Marcu. 2000. Statistics-based summarization - step one: Sentence compression. AAAI ’00, pages 703–710. AAAI Press. Finley Lacatusu, Andrew Hickl, Kirk Roberts, Ying Shi, Jeremy Bensley, Bryan Rink, Patrick Wang, and Lara Taylor, 2006. LCCs gistexter at duc 2006: Multi-strategy multi-document summarization. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
In 1393 Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. P. M. Lewis, II and R. E. Stearns. 1968. Syntax-directed transduction. J. ACM, 15(3):465–488, July. Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 510–520, Stroudsburg, PA, USA. Association for Computational Linguistics. Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceedings of the 18th conference on Computational linguistics - Volume 1, COLING ’00, pages 495–501, Stroudsburg, PA, USA. Association for Computational Linguistics. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 71–78. Chin-Yew Lin. 2003. Improving summarization performance by sentence compression: a pilot study. In Proceedings of the sixth international workshop on Information retrieval with Asian languages - Volume 11, AsianIR ’03, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. Xiaoqiang Luo and Imed Zitouni. 2005. Multi-lingual coreference resolution with syntactic features. In HLT/EMNLP. Xiaoqiang Luo, Abraham Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mentionsynchronous coreference resolution algorithm based on the bell tree. In ACL, pages 135–142. Andr´e F. T. Martins and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, ILP ’09, pages 1–9, Stroudsburg, PA, USA. Association for Computational Linguistics. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. Ryan McDonald. 2006. Discriminative Sentence Compression with Soft Syntactic Constraints. In Proceedings of the 11th˜EACL, Trento, Italy, April. Michael Mozer, Michael I. Jordan, and Thomas Petsche, editors. 1997. Advances in Neural Information Processing Systems 9, NIPS, Denver, CO, USA, December 2-5, 1996. MIT Press. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 145– 152, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Jahna Otterbacher, G¨unes¸ Erkan, and Dragomir R. Radev. 2005. Using random walks for question-focused sentence retrieval. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 915–922, Stroudsburg, PA, USA. Association for Computational Linguistics. You Ouyang, Wenjie Li, Sujian Li, and Qin Lu. 2011. Applying regression models to query-focused multidocument summarization. Inf. Process. Manage., 47(2):227–237, March. Prasad Pingali, Rahul K, and Vasudeva Varma, 2007. IIIT Hyderabad at DUC 2007. U.S. National Inst. of Standards and Technology. Chao Shen and Tao Li. 2011. Learning to rank for queryfocused multi-document summarization. In Diane J. Cook, Jian Pei, Wei Wang 0010, Osmar R. 
Zaane, and Xindong Wu, editors, ICDM, pages 626–634. IEEE. Andreas Stolcke. 2002. SRILM – an extensible language modeling toolkit. In Proceedings of ICSLP, volume 2, pages 901–904, Denver, USA. Kristina Toutanova, Chris Brockett, Michael Gamon, Jagadeesh Jagarlamudi, Hisami Suzuki, and Lucy Vanderwende. 2007. The PYTHY Summarization System: Microsoft Research at DUC 2007. In Proc. of DUC. Jenine Turner and Eugene Charniak. 2005. Supervised and unsupervised learning for sentence compression. ACL ’05, pages 290–297, Stroudsburg, PA, USA. Association for Computational Linguistics. David Zajic, Bonnie J Dorr, Jimmy Lin, and R. Schwartz. 2006. Sentence compression as a component of a multidocument summarization system. Proceedings of the 2006 Document Understanding Workshop, New York. 1394
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1395–1405, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Domain-Independent Abstract Generation for Focused Meeting Summarization Lu Wang Department of Computer Science Cornell University Ithaca, NY 14853 [email protected] Claire Cardie Department of Computer Science Cornell University Ithaca, NY 14853 [email protected] Abstract We address the challenge of generating natural language abstractive summaries for spoken meetings in a domain-independent fashion. We apply Multiple-Sequence Alignment to induce abstract generation templates that can be used for different domains. An Overgenerateand-Rank strategy is utilized to produce and rank candidate abstracts. Experiments using in-domain and out-of-domain training on disparate corpora show that our system uniformly outperforms state-of-the-art supervised extract-based approaches. In addition, human judges rate our system summaries significantly higher than compared systems in fluency and overall quality. 1 Introduction Meetings are a common way to collaborate, share information and exchange opinions. Consequently, automatically generated meeting summaries could be of great value to people and businesses alike by providing quick access to the essential content of past meetings. Focused meeting summaries have been proposed as particularly useful; in contrast to summaries of a meeting as a whole, they refer to summaries of a specific aspect of a meeting, such as the DECISIONS reached, PROBLEMS discussed, PROGRESS made or ACTION ITEMS that emerged (Carenini et al., 2011). Our goal is to provide an automatic summarization system that can generate abstract-style focused meeting summaries to help users digest the vast amount of meeting content in an easy manner. Existing meeting summarization systems remain largely extractive: their summaries are comprised exclusively of patchworks of utterances selected directly from the meetings to be summarized (Riedhammer et al., 2010; Bui et al., 2009; Xie et al., 2008). Although relatively easy to construct, extractive approaches fall short of producing concise and readable summaries, largely due C: Looking at what we’ve got, we we want an LCD display with a spinning wheel. B: You have to have some push-buttons, don’t you? C: Just spinning and not scrolling, I would say. B: I think the spinning wheel is definitely very now. A: but since LCDs seems to be uh a definite yes, C: We’re having push-buttons on the outside C: and then on the inside an LCD with spinning wheel, Decision Abstract (Summary): The remote will have push buttons outside, and an LCD and spinning wheel inside. A: and um I’m not sure about the buttons being in the shape of fruit though. D: Maybe make it like fruity colours or something. C: The power button could be like a big apple or something. D: Um like I’m just thinking bright colours. Problem Abstract (Summary): How to incorporate a fruit and vegetable theme into the remote. Figure 1: Clips from the AMI meeting corpus (Mccowan et al., 2005). A, B, C and D refer to distinct speakers. Also shown is the gold-standard (manual) abstract (summary) for the decision and the problem. to the noisy, fragmented, ungrammatical and unstructured text of meeting transcripts (Murray et al., 2010b; Liu and Liu, 2009). In contrast, human-written meeting summaries are typically in the form of abstracts — distillations of the original conversation written in new language. 
A user study from Murray et al. (2010b) showed that people demonstrate a strong preference for abstractive summaries over extracts when the text to be summarized is conversational. Consider, for example, the two types of focused summary along with their associated dialogue snippets in Figure 1. We can see that extracts are likely to include unnecessary and noisy information from the meeting transcripts. On the contrary, the manually composed summaries (abstracts) are more compact and readable, and are written in a distinctly non-conversational style. 1395 To address the limitations of extract-based summaries, we propose a complete and fully automatic domain-independent abstract generation framework for focused meeting summarization. Following existing language generation research (Angeli et al., 2010; Konstas and Lapata, 2012), we first perform content selection: given the dialogue acts relevant to one element of the meeting (e.g. a single decision or problem), we train a classifier to identify summary-worthy phrases. Next, we develop an “overgenerate-and-rank” strategy (Walker et al., 2001; Heilman and Smith, 2010) for surface realization, which generates and ranks candidate sentences for the abstract. After redundancy reduction, the full meeting abstract can thus comprise the focused summary for each meeting element. As described in subsequent sections, the generation framework allows us to identify and reformulate the important information for the focused summary. Our contributions are as follows: • To the best of our knowledge, our system is the first fully automatic system to generate natural language abstracts for spoken meetings. • We present a novel template extraction algorithm, based on Multiple Sequence Alignment (MSA) (Durbin et al., 1998), to induce domain-independent templates that guide abstract generation. MSA is commonly used in bioinformatics to identify equivalent fragments of DNAs (Durbin et al., 1998) and has also been employed for learning paraphrases (Barzilay and Lee, 2003). • Although our framework requires labeled training data for each type of focused summary (decisions, problems, etc.), we also make initial tries for domain adaptation so that our summarization method does not need human-written abstracts for each new meeting domain (e.g. faculty meetings, theater group meetings, project group meetings). We instantiate the abstract generation framework on two corpora from disparate domains — the AMI Meeting Corpus (Mccowan et al., 2005) and ICSI Meeting Corpus (Janin et al., 2003) — and produce systems to generate focused summaries with regard to four types of meeting elements: DECISIONs, PROBLEMs, ACTION ITEMSs, and PROGRESS. Automatic evaluation (using ROUGE (Lin and Hovy, 2003) and BLEU (Papineni et al., 2002)) against manually generated focused summaries shows that our summarizers uniformly and statistically significantly outperform two baseline systems as well as a state-of-the-art supervised extraction-based system. Human evaluation also indicates that the abstractive summaries produced by our systems are more linguistically appealing than those of the utterance-level extraction-based system, preferring them over summaries from the extractionbased system of comparable semantic correctness (62.3% vs. 37.7%). Finally, we examine the generality of our model across domains for two types of focused summarization — decisions and problems — by training the summarizer on out-of-domain data (i.e. the AMI corpus for use on the ICSI meeting data, and vice versa). 
The resulting systems yield results comparable to those from the same system trained on in-domain data, and statistically significantly outperform supervised extractive summarization approaches trained on in-domain data. 2 Related Work Most research on spoken dialogue summarization attempts to generate summaries for full dialogues (Carenini et al., 2011). Only recently has the task of focused summarization been studied. Supervised methods are investigated to identify key phrases or utterances for inclusion in the decision summary (Fern´andez et al., 2008; Bui et al., 2009). Based on Fern´andez et al. (2008), a relation representation is proposed by Wang and Cardie (2012) to form structured summaries; we adopt this representation here for content selection. Our research is also in line with generating abstractive summaries for conversations. Extractive approaches (Murray et al., 2005; Xie et al., 2008; Galley, 2006) have been investigated extensively in conversation summarization. Murray et al. (2010a) present an abstraction system consisting of interpretation and transformation steps. Utterances are mapped to a simple conversation ontology in the interpretation step according to their type, such as a decision or problem. Then an integer linear programming approach is employed to select the utterances that cover more entities as 1396 Dialogue Acts: C: Looking at what we've got, we we want [an LCD display with a spinning wheel]. B: You have to have some push-buttons, don't you? C: Just spinning and not scrolling , I would say . B: I think the spinning wheel is definitely very now. A: but since LCDs seems to be uh a definite yes, C: We're having push-buttons [on the outside] C: and then on the inside an LCD with spinning wheel, Relation Instances: <want, an LCD display with a spinning wheel> <an LCD display, with a spinning wheel> <have, some push-buttons> <having, push-buttons on the outside> <push-buttons, on the outside> <an LCD, with spinning wheel> … (other possibilities) <want, an LCD display with a spinning wheel> • The team will want an LCD display with a spinning wheel. • The team with work with an LCD display with a spinning wheel. • The group decide to use an LCD display with a spinning wheel. … (other possibilities) <push-buttons, on the outside> • Push-buttons are going to be on the outside. • Push-buttons on the outside will be used. • There will be push-buttons on the outside. … (other possibilities) One-Best Abstract: The group decide to use an LCD display with a spinning wheel. One-Best Abstract: There will be pushbuttons on the outside. Final Summary: The group decide to use an LCD display with a spinning wheel. There will be pushbuttons on the outside. Learned Templates … (all possible abstracts per relation instance) Relation Extraction Content Selection Template Filling Statistical Ranking Surface Realization … (one-best abstract per relation instance) PostSelection Figure 2: The abstract generation framework. It takes as input a cluster of meeting-item-specific dialogue acts, from which one focused summary is constructed. Sample relation instances are denoted in bold (The indicators are further italicized and the arguments are in [brackets]). Summary-worthy relation instances are identified by content selection module (see Section 4) and then filled into the learned templates individually. A statistical ranker subsequently selects one best abstract per relation instance (see Section 5.2). 
The post-selection component reduces the redundancy and outputs the final summary (see Section 5.3). determined by an external ontology. Liu and Liu (2009) apply sentence compression on extracted summary utterances. Though some of the unnecessary words are dropped, the resulting compressions can still be ungrammatical and unstructured. This work is also broadly related to expert system-based language generation (Reiter and Dale, 2000) and concept-to-text generation tasks (Angeli et al., 2010; Konstas and Lapata, 2012), where the generation process is decomposed into content selection (or text planning) and surface realization. For instance, Angeli et al. (2010) learn from structured database records and parallel textual descriptions. They generate texts based on a series of decisions made to select the records, fields, and proper templates for rendering. Those techniques that are tailored to specific domains (e.g. weather forecasts or sportcastings) cannot be directly applied to the conversational data, as their input is well-structured and the templates learned are domain-specific. 3 Framework Our domain-independent abstract generation framework produces a summarizer that generates a grammatical abstract from a cluster of meeting-element-related dialogue acts (DAs) — all utterances associated with a single decision, problem, action item or progress step of interest. Note that identifying these DA clusters is a difficult task in itself (Bui et al., 2009). Accordingly, our experiments evaluate two conditions — one in which we assume that they are perfectly identified, and one in which we identify the clusters automatically. The summarizer consists of two major components and is depicted in Figure 2. Given the DA cluster to be summarized, the Content Selection module identifies a set of summary-worthy relation instances represented as indicator-argument pairs (i.e. these constitute a finer-grained representation than DAs). The Surface Realization component then generates a short summary in three steps. In the first step, each relation instance is filled into templates with disparate structures that are learned automatically from the training set (Template Filling). A statistical ranker then selects one best abstract per relation instance (Statistical Ranking). Finally, selected abstracts are processed for redundancy removal in Post-Selection. Detailed descriptions for each individual step are provided in Sections 4 and 5. 4 Content Selection Phrase-based content selection approaches have been shown to support better meeting summaries (Fern´andez et al., 2008). Therefore, we chose a content selection representation of a finer granularity than an utterance: we identify relation instances that can both effectively detect the crucial content and incorporate enough syntactic information to facilitate the downstream surface realization. More specifically, our relation instances are based on information extraction methods that identify a lexical indicator (or trigger) that evokes a relation of interest and then employ syntactic information, often in conjunction with semantic constraints, to find the argument constituent(or target phrase) to be extracted. Rela1397 tion instances, then, are represented by indicatorargument pairs (Chen et al., 2011). For example, in the DA cluster of Figure 2, ⟨want, an LCD display with a spinning wheel⟩and ⟨push-buttons, on the outside⟩are two relation instances. 
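For concreteness, the minimal sketch below shows one way this indicator-argument representation could be encoded; the class fields, the tiny stopword list and the hard-coded pairs are illustrative assumptions rather than the system's actual data structures, which are built from parser output as described next.

from dataclasses import dataclass

# Minimal sketch (Python) of the indicator-argument pair representation.
# Field names and toy values are assumptions; the system derives constituents
# from Stanford parses of each dialogue act.

STOPWORDS = {"a", "an", "the", "on", "with", "to", "of"}  # tiny illustrative list

@dataclass(frozen=True)
class RelationInstance:
    indicator: str      # lexical trigger evoking the relation, e.g. "want"
    indicator_tag: str  # constituent tag of the indicator (noun or verb)
    argument: str       # argument constituent, e.g. "an LCD display with a spinning wheel"
    argument_tag: str   # NP, PP or ADJP

    def has_content_word(self) -> bool:
        # a candidate pair must contain at least one content word
        tokens = (self.indicator + " " + self.argument).lower().split()
        return any(tok not in STOPWORDS for tok in tokens)

# the two relation instances from the dialogue-act cluster in Figure 2
candidates = [
    RelationInstance("want", "VB", "an LCD display with a spinning wheel", "NP"),
    RelationInstance("push-buttons", "NNS", "on the outside", "PP"),
]
summary_worthy_candidates = [r for r in candidates if r.has_content_word()]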
Relation Instance Extraction We adopt and extend the syntactic constraints from Wang and Cardie (2012) to identify all relation instances in the input utterances; the summary-worthy ones will be selected by a discriminative classifier. Constituent and dependency parses are obtained by the Stanford parser (Klein and Manning, 2003). Both the indicator and argument take the form of constituents in the parse tree. We restrict the eligible indicator to be a noun or verb; the eligible arguments is a noun phrase (NP), prepositional phrase (PP) or adjectival phrase (ADJP). A valid indicator-argument pair should have at least one content word and satisfy one of the following constraints: • When the indicator is a noun, the argument has to be a modifier or complement of the indicator. • When the indicator is a verb, the argument has to be the subject or the object if it is an NP, or a modifier or complement of the indicator if it is a PP/ADJP. We view relation extraction as a binary classification problem rather than a clustering task (Chen et al., 2011). All relation instances can be categorized as summary-worthy or not, but only the summary-worthy ones are used for abstract generation. A discriminative classifier is trained for this purpose based on Support Vector Machines (SVMs) (Joachims, 1998) with an RBF kernel. For training data construction, we consider a relation instance to be a positive example if it shares any content word with its corresponding abstracts, and a negative example otherwise. The features used are shown in Table 1. 5 Surface Realization In this section, we describe surface realization, which renders the relation instances into natural language abstracts. This process begins with template extraction (Section 5.1). Once the templates are learned, the relation instances from Section 4 are filled into the templates to generate an abstract (see Section 5.2). Redundancy handling is discussed in Section 5.3. Basic Features number of words/content words portion of content words/stopwords number of content words in indicator/argument number of content words that are also in previous DA indicator/argument only contains stopword? number of new nouns Content Features has capitalized word? has proper noun? TF/IDF/TFIDF min/max/average Discourse Features main speaker or not? is in an adjacency pair (AP)? is in the source/target of the AP? number of source/target DA in the AP is the target of the AP a positive/negative/neutral response? is the source of the AP a question? Syntax Features indicator/argument constituent tag dependency relation of indicator and argument Table 1: Features for content selection. Most are adapted from previous work (Galley, 2006; Xie et al., 2008; Wang and Cardie, 2012). Every basic or content feature is concatenated with the constituent tags of indicator and argument to compose a new one. Main speakers include the most talkative speaker (who has said the most words) and other speakers whose word count is more than 20% of the most talkative one (Xie et al., 2008). Adjacency pair (AP) (Galley, 2006) is an important conversational analysis concept; each AP consists of a source utterance and a target utterance produced by different speakers. 5.1 Template Extraction Sentence Clustering. Template extraction starts with clustering the sentences that constitute the manually generated abstracts in the training data according to their lexical and structural similarity. 
From each cluster, multiple-sequence alignment techniques are employed to capture the recurring patterns. Intuitively, desirable templates are those that can be applied in different domains to generate the same type of focused summary (e.g. decision or problem summaries). We do not want sentences to be clustered only because they describe the same domain-specific details (e.g. they are all about “data collection”), which will lead to fragmented templates that are not reusable for new domains. We therefore replace all appearances of dates, numbers, and proper names with generic labels. We also replace words that appear in both the abstract and supporting dialogue acts by a label indicating its phrase type. For any noun phrase with its head word abstracted, the whole phrase is also replaced with “NP”. 1398 start They The group were not sure whether to VP NP use NP should include end how much would cost to make 1) The group were not sure whether to [include]VP [a recharger for the remote]NP . 2) The group were not sure whether to use [plastic and rubber or titanium for the case]NP . 3) The group were not sure whether [the remote control]NP should include [functions for controlling video]NP . 4) They were not sure how much [a recharger]NP would cost to make . … (Other abstracts) 1) The group were not sure whether to VP NP . 2) The group were not sure whether to use NP . 3) The group were not sure whether NP should include NP . 4) They were not sure how much NP would cost to make . Generic Label Replacement + Clustering Template Examples: Fine T1: The group were not sure whether to SLOTVP NP . (1, 2) Fine T2: The group were not sure whether NP SLOTVP SLOTVP NP . (3) Fine T3: SLOTNP were not sure SLOTWHADJP SLOTWHADJP NP SLOTVP SLOTVP SLOTVP SLOTVP SLOTVP . (4) Coarse T1: SLOTNP SLOTNP were not sure SLOTSBAR SLOTVP SLOTVP SLOTNP . (1, 2) Coarse T2: SLOTNP SLOTNP were not sure SLOTSBAR SLOTNP SLOTVP SLOTVP SLOTNP . (3) Coarse T3: SLOTNP were not sure SLOTWHADJP SLOTWHADJP SLOTNP SLOTVP SLOTVP SLOTVP SLOTVP . (4) Template Induction MSA Figure 3: Example of template extraction by MultipleSequence Alignment for problem abstracts from AMI. Backbone nodes shared by at least 50% sentences are shaded. The grammatical errors exist in the original abstracts. Following Barzilay and Lee (2003), we approach the sentence clustering task by hierarchical complete-link clustering with a similarity metric based on word n-gram overlap (n = 1, 2, 3). Clusters with fewer than three abstracts are removed1. Learning the Templates via MSA. For learning the structural patterns among the abstracts, Multiple-Sequence Alignment (MSA) is first computed for each cluster. MSA takes as input multiple sentences and one scoring function to measure the similarity between any two words. For insertions or deletions, a gap cost is also added. MSA can thus find the best way to align the sequences with insertions or deletions in accordance with the scorer. However, computing an optimal MSA is NP-complete (Wang and Jiang, 1994), thus we implement an approximate algorithm (Needleman and Wunsch, 1970) that iteratively aligns two sequences each time and treats the resulting alignment as a new sequence2. Figure 3 demonstrates an MSA computed from a sample cluster of ab1Clustering stops when the similarity between any pairwise clusters is below 5. This is applied to every type of summarization. We tune the parameter on a small held-out development set by manually evaluating the induced templates. 
No significant change is observed within a small range. 2We adopt the scoring function for MSA from Barzilay and Lee (2003), where aligning two identical words scores 1, inserting a gap scores −0.01, and aligning two different words scores −0.5. stracts. The MSA is represented in the form of word lattice, from which we can detect the structural similarities shared by the sentences. To transform the resulting MSAs into templates, we need to decide whether a word in the sentence should be retained to comprise the template or abstracted. The backbone nodes in an MSA are identified as the ones shared by more than 50%3 of the cluster’s sentences (shaded in gray in Figure 3). We then create a FINE template for each sentence by abstracting the non-backbone words, i.e. replacing each of those words with a generic token (last step in Figure 3). We also create a COARSE template that only preserves the nodes shared by all of the cluster’s sentences. By using the operations above, domain-independent patterns are thus identified and domain-specific details are removed. Note that we do not explicitly evaluate the quality of the learned templates, which would require a significant amount of manual evaluation. Instead, they are evaluated extrinsically. We encode the templates as features (Angeli et al., 2010) that could be selected or ignored in the succeeding abstract ranking model. 5.2 Template Filling An Overgenerate-and-Rank Approach. Since filling the relation instances into templates of distinct structures may result in abstracts of varying quality, we rank the abstracts based on the features of the template, the transformation conducted, and the generated abstract. This is realized by the Overgenerate-and-Rank strategy (Walker et al., 2001; Heilman and Smith, 2010). It takes as input a set of relation instances (from the same cluster) R = {⟨indi, argi⟩}N i=1 that are produced by content selection component, a set of templates T = {tj}M j=1 that are represented as parsing trees, a transformation function F (described below), and a statistical ranker S for ranking the generated abstracts, for which we defer description later in this Section. For each ⟨indi, argi⟩, the overgenerate-andrank approach fills it into each template in T by applying F to generate all possible abstracts. Then the ranker S selects the best abstract absi. Postselection is conducted on the abstracts {absi}N i=1 to form the final summary. 3See Barzilay and Lee (2003) for a detailed discussion about the choice of 50% according to pigeonhole principle. 1399 The transformation function F models the constituent-level transformations of relation instances and their mappings to the parse trees of templates. With the intuition that people will reuse the relation instances from the transcripts albeit not necessarily in their original form to write the abstracts, we consider three major types of mapping operations for the indicator or argument in the source pair, namely, Full-Constituent Mapping, Sub-Constituent Mapping, and Removal. FullConstituent Mapping denotes that a source constituent is mapped directly to a target constituent of the template parse tree with the same tag. SubConstituent Mapping encodes more complex and flexible transformations in that a sub-constituent of the source is mapped to a target constituent with the same tag. This operation applies when the source has a tag of PP or ADJP, in which case its sub-constituent, if any, with a tag of NP, VP or ADJP can be mapped to the target constituent with the same tag. 
For instance, an argument “with a spinning wheel” (PP) can be mapped to an NP in a template because it has a sub-constituent “a spinning wheel” (NP). Removal means a source is not mapped to any constituent in the template. Formally, F is defined as: F(⟨indsrc, argsrc⟩, t) = {⟨indtran k , argtran k , indtar k , argtar k ⟩}K k=1 where ⟨indsrc, argsrc⟩∈R is a relation instance (source pair); t ∈T is a template; indtran k and argtran k is the transformed pair of indsrc and argsrc; indtar k and argtar k are constituents in t, and they compose one target pair for ⟨indsrc, argsrc⟩. We require that indsrc and argsrc are not removed at the same time. Moreover, for valid indtar k and argtar k , the words subsumed by them should be all abstracted in the template, and they do not overlap in the parse tree. To obtain the realized abstract, we traverse the parse tree of the filled template in pre-order. The words subsumed by the leaf nodes are thus collected sequentially. Learning a Statistical Ranker. We utilize a discriminative ranker based on Support Vector Regression (SVR) (Smola and Sch¨olkopf, 2004) to rank the generated abstracts. Given the training data that includes clusters of gold-standard summary-worthy relation instances, associated abstracts they support, and the parallel templates for each abstract, training samples for the ranker are Basic Features number of words in indsrc/argsrc number of new nouns in indsrc/argsrc indtran k /argtran k only has stopword? number of new nouns in indtran k /argtran k Structure Features constituent tag of indsrc/argsrc constituent tag of indsrc with constituent tag of indtar constituent tag of argsrc with constituent tag of argtar transformation of indsrc/argsrc combined with constituent tag dependency relation of indsrc and argsrc dependency relation of indtar and argtar above 2 features have same value? Template Features template type (fine/coarse) realized template (e.g. “the group decided to”) number of words in template the template has verb? Realization Features realization has verb? realization starts with verb? realization has adjacent verbs/NPs? indsrc precedes/succeeds argsrc? indtar precedes/succeeds argtar? above 2 features have same value? Language Model Features log pLM(first word in indtran k |previous 1/2 words) log pLM(realization) log pLM(first word in argtran k |previous 1/2 words) log pLM(realization)/length log pLM(next word | last 1/2 words in indtran k ) log pLM(next word | last 1/2 words in argtran k ) Table 2: Features for abstracts ranking. The language model features are based on a 5-gram language model trained on Gigaword (Graff, 2003) by SRILM (Stolcke, 2002). constructed according to the transformation function F mentioned above. Each sample is represented as: (⟨indsrc, argsrc⟩, ⟨indtran k , argtran k , indtar k , argtar k ⟩, t, a) where ⟨indsrc, argsrc⟩is the source pair, ⟨indtran k , argtran k ⟩ is the transformed pair, ⟨indtar k , argtar k ⟩is the target pair in template t, and a is the abstract parallel to t. We first find ⟨indtar,abs k , argtar,abs k ⟩, which is the corresponding constituent pair of ⟨indtar k , argtar k ⟩ in a. Then we identify the summary-worthy words subsumed by ⟨indtran k , argtran k ⟩that also appear in a. If those words are all subsumed by ⟨indtar,abs k , argtar,abs k ⟩, then it is considered to be a positive sample, and a negative sample otherwise. Table 2 displays the features used in abstract ranking. 5.3 Post-Selection: Redundancy Handling. 
Post-selection aims to maximize the information coverage and minimize the redundancy of the summary. Given the generated abstracts A = 1400 Input : relation instances R = {⟨indi, argi⟩}N i=1, generated abstracts A = {absi}N i=1, objective function f , cost function C Output: final abstract G G ←Φ (empty set); U ←A; while U ̸= Φ do abs ←arg maxabsi∈U f(A,G∪absi)−f(A,G) C(absi) ; if f(A, G ∪abs) −f(A, G) ≥0 then G ←G ∪abs; end U ←U \ abs; end Algorithm 1: Greedy algorithm for postselection to generate the final summary. {absi}N i=1, we use a greedy algorithm (Lin and Bilmes, 2010) to select a subset A′, where A′ ⊆A, to form the final summary. We define wij as the unigram similarity between abstracts absi and absj, C(absi) as the number of words in absi. We employ the following objective function: f(A, G) = P absi∈A\G P absj∈G wi,j, G ⊆A Algorithm 1 sequentially finds an abstract with the greatest ratio of objective function gain to length, and add it to the summary if the gain is non-negative. 6 Experimental Setup Corpora. Two disparate corpora are used for evaluation. The AMI meeting corpus (Mccowan et al., 2005) contains 139 scenario-driven meetings, where groups of four people participate in a series of four meetings for a fictitious project of designing remote control. The ICSI meeting corpus (Janin et al., 2003) consists of 75 naturally occurring meetings, each of them has 4 to 10 participants. Compared to the fabricated topics in AMI, the conversations in ICSI tend to be specialized and technical, e.g. discussion about speech and language technology. We use 57 meetings in ICSI and 139 meetings in AMI that include a short (usually one-sentence), manually constructed abstract summarizing each important output for every meeting. Decision and problem summaries are annotated for both corpora. AMI has extra action item summaries, and ICSI has progress summaries. The set of dialogue acts that support each abstract are annotated as such. System Inputs. We consider two system input settings. In the True Clusterings setting, we use the annotations to create perfect partitions of the DAs for input to the system; in the System Figure 4: Content selection evaluation by using ROUGE-SU4 (multiplied by 100). SVM-DA and SVM-TOKEN denotes for supervised extract-based methods with SVMs on utterance- and token-level. Summaries for decision, problem, action item, and progress are generated and evaluated for AMI and ICSI (with names in parentheses). X-axis shows the number of meetings used for training. Clusterings setting, we employ a hierarchical agglomerative clustering algorithm used for this task in (Wang and Cardie, 2011). DAs are grouped according to a classifier trained beforehand. Baselines and Comparisons. We compare our system with (1) two unsupervised baselines, (2) two supervised extractive approaches, and (3) an oracle derived from the gold standard abstracts. Baselines. As in Riedhammer et al. (2010), the LONGEST DA in each cluster is selected as the summary. The second baseline picks the cluster prototype (i.e. the DA with the largest TFIDF similarity with the cluster centroid) as the summary according to Wang and Cardie (2011). Although it is possible that important content is spread over multiple DAs, both baselines allow us to determine summary quality when summaries are restricted to a single utterance. Supervised Learning. 
We also compare our approach to two supervised extractive summarization methods — Support Vector Machines (Joachims, 1998) trained with the same fea1401 tures as our system (see Table 1) to identify the important DAs (no syntax features) (Xie et al., 2008; Sandu et al., 2010) or tokens (Fern´andez et al., 2008) to include into the summary4. Oracle. We compute an oracle consisting of the words from the DA cluster that also appear in the associated abstract to reflect the gap between the best possible extracts and the human abstracts. 7 Results Content Selection Evaluation. We first employ ROUGE (Lin and Hovy, 2003) to evaluate the content selection component with respect to the human written abstracts. ROUGE computes the ngram overlapping between the system summaries with the reference summaries, and has been used for both text and speech summarization (Dang, 2005; Xie et al., 2008). We report ROUGE-2 (R2) and ROUGE-SU4 (R-SU4) that are shown to correlate with human evaluation reasonably well. In AMI, four meetings of different functions are carried out in each group5. 35 meetings for “conceptual design” are randomly selected for testing. For ICSI, we reserve 12 meetings for testing. The R-SU4 scores for each system are displayed in Figure 4 and show that our system uniformly outperforms the baselines and supervised systems. The learning curve of our system is relatively flat, which means not many training meetings are required to reach a usable performance level. Note that the ROUGE scores are relative low when the reference summaries are human abstracts, even for evaluation among abstracts produced by different annotators (Dang, 2005). The intrinsic difference of styles between dialogue and human abstract further lowers the scores. But the trend is still respected among the systems. Abstract Generation Evaluation. To evaluate the full abstract generation system, the BLEU score (Papineni et al., 2002) (the precision of unigrams and bigrams with a brevity penalty) is computed with human abstracts as reference. BLEU has a fairly good agreement with human judgement and has been used to evaluate a variety of language generation systems (Angeli et al., 2010; Konstas and Lapata, 2012). 4We use SVMlight (Joachims, 1999) with RBF kernel by default parameters for SVM-based classifiers and regressor. 5The four types of meetings in AMI are: project kick-off (35 meetings), functional design (35 meetings), conceptual design (35 meetings), and detailed design (34 meetings). Figure 5: Full abstract generation system evaluation by using BLEU (multiplied by 100). SVM-DA denotes for supervised extractive methods with SVMs on utterance-level. We are not aware of any existing work generating abstractive summaries for conversations. Therefore, we compare our full system against a supervised utterance-level extractive method based on SVMs along with the baselines. The BLEU scores in Figure 5 show that our system improves the scores consistently over the baselines and the SVM-based approach. Domain Adaptation Evaluation. We further examine our system in domain adaptation scenarios for decision and problem summarization, where we train the system on AMI for use on ICSI, and vice versa. Table 3 indicates that, with both true clusterings and system clusterings, our system trained on out-of-domain data achieves comparable performance with the same system trained on in-domain data. In most experiments, it also significantly outperforms the baselines and the extract-based approaches (p < 0.05). Human Evaluation. 
We randomly select 15 decision and 15 problem DA clusters (true clusterings). We evaluate fluency (is the text grammatical?) and semantic correctness (does the summary convey the gist of the DAs in the cluster?) for OUR SYSTEM trained on IN-domain data 1402 System (True Clusterings) AMI Decision ICSI Decision AMI Problem ICSI Problem R-2 R-SU4 BLEU R-2 R-SU4 BLEU R-2 R-SU4 BLEU R-2 R-SU4 BLEU CENTROID DA 1.3 3.0 7.7 1.8 3.5 3.8 1.0 2.7 4.2 1.0 2.3 2.8 LONGEST DA 1.6 3.3 7.0 2.8 4.7 6.5 1.0 3.0 3.6 1.2 3.4 4.6 SVM-DA (IN) 3.4 4.7 9.7 3.4 4.5 5.7 1.4 2.4 5.0 1.6 3.4 3.4 SVM-DA (OUT) 2.7 4.2 6.6 3.1 4.2 4.6 1.4 2.2 2.5 1.3 3.0 4.6 OUR SYSTEM (IN) 4.5 6.2 11.6 4.9 7.1 10.0 3.1 4.8 7.2 4.0 5.9 6.0 OUR SYSTEM (OUT) 4.6 6.1 10.3 4.8 6.4 7.8 3.5 4.7 6.2 3.0 5.5 5.3 ORACLE 7.5 12.0 22.8 9.9 14.9 20.2 6.6 11.3 18.9 6.4 12.6 13.0 System (System Clusterings) AMI Decision ICSI Decision AMI Problem ICSI Problem R-2 R-SU4 BLEU R-2 R-SU4 BLEU R-2 R-SU4 BLEU R-2 R-SU4 BLEU CENTROID DA 1.4 3.3 3.8 1.4 2.1 2.0 0.8 2.8 2.9 0.9 2.3 1.8 LONGEST DA 1.4 3.3 5.7 1.7 3.4 5.5 0.8 3.2 4.1 0.9 3.4 4.4 SVM-DA (IN) 2.6 4.6 10.5 3.5 6.5 7.1 1.8 3.7 4.9 1.8 4.0 4.6 SVM-DA (OUT) 3.4 5.8 10.3 2.7 4.8 6.3 2.1 3.8 4.3 1.5 3.8 3.5 OUR SYSTEM (IN) 3.5 5.4 11.7 4.4 7.4 9.1 3.3 4.6 9.5 2.3 4.2 7.4 OUR SYSTEM (OUT) 3.9 6.4 11.4 4.1 5.1 8.4 3.6 5.6 8.9 1.8 4.0 6.8 ORACLE 6.4 12.0 15.1 8.2 15.2 17.6 6.5 13.0 20.9 5.5 11.9 15.5 Table 3: Domain adaptation evaluation. Systems trained on out-of-domain data are denoted with “(OUT)”, otherwise with “(IN)”. ROUGE and BLEU scores are multiplied by 100. Our systems that statistically significantly outperform all the other approaches (except ORACLE) are in bold (p < 0.05, paired t-test). The numbers in italics show the significant improvement over the baselines by our systems. System Fluency Semantic Length Mean S.D. Mean S.D. OUR SYSTEM (IN) 3.67 0.85 3.27 1.03 23.65 OUR SYSTEM (OUT) 3.58 0.90 3.25 1.16 24.17 SVM-DA (IN) 3.36 0.84 3.44 1.26 38.83 Table 4: Human evaluation results of Fluency and Semantic correctness for the generated abstracts. The ratings are on 1 (worst) to 5 (best) scale. The average Length of the abstracts for each system is also listed. and OUT-of-domain data, and for the utterancelevel extraction system (SVM-DA) trained on indomain data. Each cluster of DAs along with three randomly ordered summaries are presented to the judges. Five native speaking Ph.D. students (none are authors) performed the task. We carry out an one-way Analysis of Variance which shows significant differences in score as a function of system (p < 0.05, paired t-test). Results in Table 4 demonstrate that our system summaries are significantly more compact and fluent than the extract-based method (p < 0.05) while semantic correctness is comparable. The judges also rank the three summaries in terms of the overall quality in content, conciseness and grammaticality. An inter-rater agreement of Fleiss’s κ = 0.45 (moderate agreement (Landis and Koch, 1977)) was computed. Judges selected our system as the best system in 62.3% scenarios (IN-DOMAIN: 35.6%, OUT-OF-DOMAIN: 26.7%). Sample summaries are exhibited in Figure 6. 8 Conclusion We presented a domain-independent abstract generation framework for focused meeting summarization. Experimental results on two disparate meeting corpora show that our system can uniDecision Summary: Human: The remote will have push buttons outside, and an LCD and spinning wheel inside. Our System (In): The group decide to use an LCD display with a spinning wheel. 
There will be push-buttons on the outside. Our System (Out): LCD display is going to be with a spinning wheel. It is necessary having push-buttons on the outside. SVM-DA: Looking at what we’ve got, we we want an LCD display with a spinning wheel. Just spinning and not scrolling, I would say. I think the spinning wheel is definitely very now. We’re having push-buttons on the outside Problem Summary: Human: How to incorporate a fruit and vegetable theme into the remote. Our System (In): Whether to include the shape of fruit. The team had to thinking bright colors. Our System (Out): It is unclear that the buttons being in the shape of fruit. SVM-DA: and um Im not sure about the buttons being in the shape of fruit though. Figure 6: Sample decision and problem summaries generated by various systems for examples in Figure 1. formly outperform the state-of-the-art supervised extraction-based systems in both automatic and manual evaluation. Our system also exhibits an ability to train on out-of-domain data to generate abstracts for a new target domain. 9 Acknowledgments This work was supported in part by National Science Foundation Grant IIS-0968450 and a gift from Boeing. We thank Moontae Lee, Myle Ott, Yiye Ruan, Chenhao Tan, and the ACL reviewers for valuable suggestions and advice on various aspects of this work. 1403 References Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 502–512, Stroudsburg, PA, USA. Association for Computational Linguistics. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple-sequence alignment. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 16–23, Stroudsburg, PA, USA. Association for Computational Linguistics. Trung H. Bui, Matthew Frampton, John Dowding, and Stanley Peters. 2009. Extracting decisions from multi-party dialogue using directed graphical models and semantic similarity. In Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL ’09, pages 235–243, Stroudsburg, PA, USA. Association for Computational Linguistics. Giuseppe Carenini, Gabriel Murray, and Raymond Ng. 2011. Methods for Mining and Summarizing Text Conversations. Morgan & Claypool Publishers. Harr Chen, Edward Benson, Tahira Naseem, and Regina Barzilay. 2011. In-domain relation discovery with meta-constraints via posterior regularization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 530–540, Stroudsburg, PA, USA. Association for Computational Linguistics. Hoa T. Dang. 2005. Overview of DUC 2005. In Document Understanding Conference. Richard Durbin, Sean R. Eddy, Anders Krogh, and Graeme Mitchison. 1998. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, July. Raquel Fern´andez, Matthew Frampton, John Dowding, Anish Adukuzhiyil, Patrick Ehlen, and Stanley Peters. 2008. Identifying relevant phrases to summarize decisions in spoken meetings. In INTERSPEECH, pages 78–81. Michel Galley. 2006. A skip-chain conditional random field for ranking meeting utterances by importance. 
In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP ’06, pages 364–372, Stroudsburg, PA, USA. Association for Computational Linguistics. David Graff. 2003. English Gigaword. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 609–617, Stroudsburg, PA, USA. Association for Computational Linguistics. A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke, and C. Wooters. 2003. The icsi meeting corpus. volume 1, pages I–364–I–367 vol.1. Thorsten Joachims. 1998. Text categorization with suport vector machines: Learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning, ECML ’98, pages 137–142, London, UK, UK. Springer-Verlag. Thorsten Joachims. 1999. Advances in kernel methods. chapter Making large-scale support vector machine learning practical, pages 169–184. MIT Press, Cambridge, MA, USA. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 423– 430, Stroudsburg, PA, USA. Association for Computational Linguistics. Ioannis Konstas and Mirella Lapata. 2012. Conceptto-text generation via discriminative reranking. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 369–378, Stroudsburg, PA, USA. Association for Computational Linguistics. J R Landis and G G Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174. Hui Lin and Jeff Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 912–920, Stroudsburg, PA, USA. Association for Computational Linguistics. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 71–78. Fei Liu and Yang Liu. 2009. From extractive to abstractive meeting summaries: can it be done by sentence compression? In Proceedings of the ACLIJCNLP 2009 Conference Short Papers, ACLShort ’09, pages 261–264, Stroudsburg, PA, USA. Association for Computational Linguistics. 1404 I. Mccowan, G. Lathoud, M. Lincoln, A. Lisowska, W. Post, D. Reidsma, and P. Wellner. 2005. The ami meeting corpus. In In: Proceedings Measuring Behavior 2005, 5th International Conference on Methods and Techniques in Behavioral Research. L.P.J.J. Noldus, F. Grieco, L.W.S. Loijens and P.H. Zimmerman (Eds.), Wageningen: Noldus Information Technology. Gabriel Murray, Steve Renals, and Jean Carletta. 2005. Extractive summarization of meeting recordings. In INTERSPEECH, pages 593–596. Gabriel Murray, Giuseppe Carenini, and Raymond Ng. 2010a. Interpretation and transformation for abstracting conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 894–902, Stroudsburg, PA, USA. 
Association for Computational Linguistics. Gabriel Murray, Giuseppe Carenini, and Raymond T. Ng. 2010b. Generating and validating abstracts of meeting conversations: a user study. In INLG. S. B. Needleman and C. D. Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of molecular biology, 48(3):443–453, March. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press, New York, NY, USA. Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-T¨ur. 2010. Long story short - global unsupervised models for keyphrase based meeting summarization. Speech Commun., 52(10):801–815, October. Oana Sandu, Giuseppe Carenini, Gabriel Murray, and Raymond Ng. 2010. Domain adaptation to summarize human conversations. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, DANLP 2010, pages 16–22, Stroudsburg, PA, USA. Association for Computational Linguistics. Alex J. Smola and Bernhard Sch¨olkopf. 2004. A tutorial on support vector regression. Statistics and Computing, 14(3):199–222, August. Andreas Stolcke. 2002. SRILM – an extensible language modeling toolkit. In Proceedings of ICSLP, volume 2, pages 901–904, Denver, USA. Marilyn A. Walker, Owen Rambow, and Monica Rogati. 2001. Spot: a trainable sentence planner. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL ’01, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. Lu Wang and Claire Cardie. 2011. Summarizing decisions in spoken meetings. In Proceedings of the Workshop on Automatic Summarization for Different Genres, Media, and Languages, WASDGML ’11, pages 16–24, Stroudsburg, PA, USA. Association for Computational Linguistics. Lu Wang and Claire Cardie. 2012. Focused meeting summarization via unsupervised relation extraction. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL ’12, pages 304–313, Stroudsburg, PA, USA. Association for Computational Linguistics. Lusheng Wang and Tao Jiang. 1994. On the complexity of multiple sequence alignment. Journal of Computational Biology, 1(4):337–348. Shasha Xie, Yang Liu, and Hui Lin. 2008. Evaluating the effectiveness of features and sampling in extractive meeting summarization. In in Proc. of IEEE Spoken Language Technology (SLT. 1405
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1406–1415, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Statistical NLG Framework for Aggregated Planning and Realization Ravi Kondadadi∗, Blake Howald and Frank Schilder Thomson Reuters, Research & Development 610 Opperman Drive, Eagan, MN 55123 [email protected] Abstract We present a hybrid natural language generation (NLG) system that consolidates macro and micro planning and surface realization tasks into one statistical learning process. Our novel approach is based on deriving a template bank automatically from a corpus of texts from a target domain. First, we identify domain specific entity tags and Discourse Representation Structures on a per sentence basis. Each sentence is then organized into semantically similar groups (representing a domain specific concept) by k-means clustering. After this semi-automatic processing (human review of cluster assignments), a number of corpus–level statistics are compiled and used as features by a ranking SVM to develop model weights from a training corpus. At generation time, a set of input data, the collection of semantically organized templates, and the model weights are used to select optimal templates. Our system is evaluated with automatic, non–expert crowdsourced and expert evaluation metrics. We also introduce a novel automatic metric – syntactic variability – that represents linguistic variation as a measure of unique template sequences across a collection of automatically generated documents. The metrics for generated weather and biography texts fall within acceptable ranges. In sum, we argue that our statistical approach to NLG reduces the need for complicated knowledge-based architectures and readily adapts to different domains with reduced development time. ∗*Ravi Kondadadi is now affiliated with Nuance Communications, Inc. 1 Introduction NLG is the process of generating natural-sounding text from non-linguistic inputs. A typical NLG system contains three main components: (1) Document (Macro) Planning - deciding what content should be realized in the output and how it should be structured; (2) Sentence (Micro) planning generating a detailed sentence specification and selecting appropriate referring expressions; and (3) Surface Realization - generating the final text after applying morphological modifications based on syntactic rules (see e.g., Bateman and Zock (2003), Reiter and Dale (2000) and McKeown (1985)). However, document planning is arguably one of the most crucial components of an NLG system and is responsible for making the texts express the desired communicative goal in a coherent structure. If the document planning stage fails, the communicative goal of the generated text will not be met even if the other two stages are perfect. While most traditional systems simplify development by using a pipelined approach where (1-3) are executed in a sequence, this can result in errors at one stage propagating to successive stages (see e.g., Robin and McKeown (1996)). We propose a hybrid framework that combines (1-3) by converting data to text in one single process. Most NLG systems fall into two broad categories: knowledge-based and statistical. Knowledge-based systems heavily depend on having domain expertise to come up with handcrafted rules at each stage of a pipeline. 
Although knowledge-based systems can produce high quality text, they are (1) very expensive to build, involving a great deal of discussion with the end users of the system for the document planning stage alone; (2) limited in linguistic coverage, because capturing linguistic variation is time-consuming; and (3) not reusable, because the developed components cannot be carried over and one has to start from scratch for each new domain. Statistical systems, on the other hand, are fairly inexpensive, more adaptable and rely on having historical data for the given domain. Coverage is likely to be high if more historical data is available. The main disadvantage of statistical systems is that they are more prone to errors and the output text may not be coherent, as fewer constraints are placed on the generated text. Our framework is a hybrid of statistical and template-based systems. Many knowledge-based systems use templates to generate text. A template structure contains “gaps” that are filled to generate the output. The idea is to create a large bank of templates from the historical data and select the right template based on some constraints. To the best of our knowledge, this is the first hybrid statistical-template-based system that combines all three stages of NLG. Experiments with different variants of our system (for biography and weather subject matter domains) demonstrate that our system generates reasonable texts. Also, in addition to the standard metrics used to evaluate NLG systems (e.g., BLEU, NIST, etc.), we present a unique text evaluation metric called syntactic variability to measure the linguistic variation of generated texts. This metric applies at the document collection level and is based on computing the number of unique template sequences among all the generated texts. A higher number indicates the texts are more variable and natural-sounding, whereas a lower number shows they are more redundant. We argue that this metric is useful for evaluating template-based systems and for any type of text generation in domains where linguistic variability is favored (e.g., the user is expected to go through more than one document in the same session). The main contributions of this paper are (1) a statistical NLG system that combines document and sentence planning and surface realization into one single process; and (2) a new metric, syntactic variability, that measures the syntactic and morphological variability of the generated texts. We believe this is the first work to propose an automatic metric to measure the linguistic variability of generated texts in NLG. Section 2 provides an overview of related work on NLG. We present our main system in Section 3. The system is evaluated and discussed in Section 4. Finally, we conclude in Section 5 and point out future directions of research. 2 Background Typically, knowledge-based NLG systems are implemented by rules and, as mentioned above, have a pipelined architecture for the document and sentence planning stages and surface realization (Hovy, 1993; Moore and Paris, 1993). However, document planning is arguably the most important task (Sripada et al., 2001). It follows that approaches to document planning are rule-based as well and, concomitantly, are usually domain specific. For example, Bouayad-Agha et al. (2011) proposed document planning based on an ontology knowledge base to generate football summaries. In rule-based systems, hand-crafted rules govern everything from content selection to grammatical choice to postprocessing (e.g., pronoun generation).
These rules are often tailored to a given system, with input from multiple experts; consequently, there is a high associated development cost (e.g., 12 person-months for the SUMTIME-METEO system (Belz, 2007)). Statistical approaches can reduce extensive development time by relying on corpus data to “learn” rules for one or more components of an NLG system (Langkilde and Knight, 1998). For example, Duboue and McKeown (2003) proposed a statistical approach to extract content selection rules for biography descriptions. Further, statistical approaches should be more adaptable to different domains than their rule-based equivalents (Angeli et al., 2012). For example, Barzilay and Lapata (2005) formulated content selection as a classification task to produce football summaries, and Kelly et al. (2009) extended Barzilay and Lapata’s approach to generate match reports for cricket. The present work builds on Howald et al. (2013), where, in a given corpus, a combination of domain-specific named entity tagging and sentence clustering (based on semantic predicates) was used to generate templates. However, while that system consolidated both sentence planning and surface realization with this approach (described in more detail in Section 3), the document plan was given via the input data and sequencing information was present in the training documents. For the present research, we introduce a similar method that leverages the distributions of document-level features in the training corpus to incorporate a statistical document planning component. Consequently, we are able to create a streamlined statistical NLG architecture that balances natural, human-like variability with appropriate and accurate information. 3 Methodology In order to generate text for a given domain, our system runs input data through a statistical ranking model to select a sequence of templates that best fit the input data (E). In order to build the ranking model, our system takes historical data (a corpus) for the domain through four components: (A) preprocessing; (B) “conceptual unit” creation; (C) collecting statistics; and (D) ranking model building (summarized in Figure 1). In this section, we describe each component in detail. Figure 1: System Architecture. 3.1 Preprocessing The first component processes the given corpus to extract templates. We assume that each document in the corpus has been assigned to a specific domain. Preprocessing involves uncovering the underlying semantic structure of the corpus and using this as a foundation for template creation (Lu et al., 2009; Lu and Ng, 2011; Konstas and Lapata, 2012). We first split each document in the corpus into sentences and create a shallow Discourse Representation Structure (following Discourse Representation Theory (Kamp and Reyle, 1993)) for each sentence. The DRS consists of semantic predicates and named entity tags. We use the Boxer semantic analyzer (Bos, 2008) to extract semantic predicates such as EVENT or DATE. In parallel, domain-specific named entity tags are identified and, in conjunction with the semantic predicates, are used to create templates. We developed the named-entity tagger for the weather domain ourselves. To tag entities in the biography domain, we used OpenCalais (www.opencalais.com). For example, for the biography in (1), the conceptual meaning (semantic predicates and domain-specific entities) of sentences (a-b) is represented in (c-d). The corresponding templates are shown in (e-f). (1) Sentence a. Mr.
Mitsutaka Kambe has been serving as Managing Director of the 77 Bank, Ltd. since June 27, 2008. b. He holds a Bachelor’s in finance from USC and a MBA from UCLA. Conceptual Meaning c. SERVING | TITLE | PERSON | COMPANY | DATE d. HOLDS | DEGREE | SUBJECT | INSTITUTION| EVENT Templates e. [person] has been serving as [title] of the [company] since [date]. f. [person] holds a [degree] in [subject] from [institution] and a [degree] from [institution]. The outputs of the preprocessing stage are the template bank and predicate information for each template in the corpus.1 3.2 Creating Conceptual Units The next step is to create conceptual units for the corpus by clustering templates. This is a semiautomatic process where we use the predicate information for each template to compute similarity between templates. We use k-means clustering with k (equivalent to the number of semantic concepts in the domain) set to an arbitrarily high value (100) to over-generate (using the WEKA toolkit (Witten and Frank, 2005)). This allows for easier manual verification of the generated clusters and we merge them if necessary. We assign a unique identifier called a CuId (Conceptual Unit Identifier) to each cluster, which represents a “conceptual unit”. We associate each template in the corpus to a corresponding CuId. For example, in (2), using the templates in (1e-f), the identified named entities are assigned to a clustered CuId (2a-b). (2) Conceptual Units a. {CuId : 000} – [person] has been serving as [title] of the [company] since [date]. b. {CuId : 001} – [person] holds a [degree] in [subject] from [institution] and a [degree] from [institution]. At this stage, we will have a set of conceptual units with corresponding template collections (see Howald et al. (2013) for a further explanation of Sections 3.1-3.2). 1A similar approach to the clustering of semantic content is found in Duboue and McKeown (2003), where text with stopwords removed were used as semantic input. Boxer provides a similar representation with the addition of domain general tags. However, to contrast our work from Duboue and McKeown, which focused on content selection, we are focused on learning templates from the semantic representations for the complete generation system (covering content selection, aggregation, sentence and document planning). 1408 3.3 Collecting Corpus Statistics After identifying the different conceptual units and the template bank, we collect a number of statistics from the corpus: • Frequency distribution of templates overall and per position • Frequency distribution of CuIds overall and per position • Average number of entity tags by CuId as well as the entity distribution by CuId • Average number of entity tags by position as well as the entity distribution by position • Average number of words per CuId. • Average number of words per CuId and position combination. • Average number of words per position • Frequency distribution of the main verbs by position • Frequency distribution of CuId sequences (bigrams and trigrams only) overall and per position • Frequency distribution of template sequences (bigrams and trigrams only) overall and per position • Frequency distribution of entity tag sequences overall and per position • The average, minimum, maximum number of CuIds across all documents As discussed in the next section, these statistics are turned into features used for building a ranking model in the next component. 
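As a concrete illustration, the short sketch below tabulates a few of these statistics from toy records; the (document, position, CuId, template) tuple format and the example values are assumptions made for illustration, not the system's actual data layout.

from collections import Counter, defaultdict

# Toy corpus records: (document id, sentence position, CuId, template text).
corpus = [
    ("doc1", 0, "CuId_000", "[person] has been serving as [title] of the [company] since [date] ."),
    ("doc1", 1, "CuId_001", "[person] holds a [degree] in [subject] from [institution] ."),
    ("doc2", 0, "CuId_000", "[person] was appointed [title] at [company] in [date] ."),
]

cuid_freq = Counter(cuid for _, _, cuid, _ in corpus)             # CuId frequency overall
cuid_by_pos = Counter((pos, cuid) for _, pos, cuid, _ in corpus)  # CuId frequency per position

words_per_cuid = defaultdict(list)
for _, _, cuid, template in corpus:
    words_per_cuid[cuid].append(len(template.split()))
avg_words_per_cuid = {c: sum(ws) / len(ws) for c, ws in words_per_cuid.items()}

# most frequent CuId at each position, later consulted as a feature by the ranking model
best_cuid_at = {}
for (pos, cuid), n in cuid_by_pos.items():
    if n > best_cuid_at.get(pos, (0, None))[0]:
        best_cuid_at[pos] = (n, cuid)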
3.4 Building a ranking model The core component of our system is a statistical model that ranks a set of templates for a given position (sentence 1, sentence 2, ..., sentence n) based on the input data. The input data in our tasks was extracted from a training document; this serves as a temporary surrogate to a database. The task is to learn the ranks of all the templates from all CuIds at each position. To generate the training data, we first filter the templates that have named entity tags not specified in the input data. This will make sure the generated text does not have any unfilled entity tags. We then rank templates according to the Levenshtein edit distance (Levenshtein, 1966) from the template corresponding to the current sentence in the training document (using the top 10 ranked templates in training for ease of processing effort). We experimented with other ranking schemes such as entity-based similarity (similarity between entity sequences in the templates) and a combination of edit-distance based and entity-based similarities. We obtained better results with edit distance. For each template, we generate the following features to build the ranking model. Most of the features are based on the corpus statistics mentioned above. • CuId given position: This is a binary feature where the current CuId is either the same as the most frequent CuId for the position (1) or not (0). • Overlap of named entities: Number of common entities between current CuId and most likely CuId for the position • Prior template: Probability of the sequence of templates selected at the previous position and the current template (iterated for the last three positions). • Prior CuId: Probability of the sequence of the CuId selected at the previous position and the current CuId (iterated for the last three positions). • Difference in number of words: Absolute difference between number of words for current template and average number of words for the CuId • Difference in number of words given position: Absolute difference between number of words for current template and average number of words for CuId at given position • Percentage of unused data: This feature represents the portion of the unused input so far. • Difference in number of named entities: Absolute difference between the number of named entities in the current template and the average number of named entities for the current position • Most frequent verb for the position: Binary valued feature where the main verb of the template belongs to the most frequent verb group given the position is either the same (1) or not (0). • Average number of words used: Ratio of number of words in the generated text so far to the average number of words. • Average number of entities: Ratio of number of named entities in the generated text so far to the average number of named entities. • Most likely CuId given position and previous CuId: Binary feature indicating if the current CuId is most likely given the position and the previous CuId. • Similarity between the most likely template in CuId and current template: Edit distance between the current template and the most likely template for the current CuId. • Similarity between the most likely template in CuId given position and current template: Edit distance between the current template and the most likely template for the current CuId at the current position. 
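To make the edit-distance ranking signal concrete, the sketch below orders candidate templates by their distance to the template of the observed training sentence; the word-level granularity and the helper names are our assumptions for readability, not the exact implementation.

# Word-level Levenshtein distance between two templates (a simplifying assumption;
# the distance could equally be computed over characters).
def levenshtein(a, b):
    x, y = a.split(), b.split()
    prev = list(range(len(y) + 1))
    for i, xi in enumerate(x, 1):
        curr = [i]
        for j, yj in enumerate(y, 1):
            curr.append(min(prev[j] + 1,                 # delete xi
                            curr[j - 1] + 1,             # insert yj
                            prev[j - 1] + (xi != yj)))   # substitute
        prev = curr
    return prev[-1]

# Candidates closest to the gold template receive the best ranks; the top-ranked
# templates become the training examples for the ranking SVM described next.
def rank_candidates(gold_template, candidates, top_n=10):
    return sorted(candidates, key=lambda t: levenshtein(gold_template, t))[:top_n]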
We used a linear kernel for a ranking SVM (Joachims, 2002) (cost set to total queries) to learn the weights associated with each feature for the different domains. 3.5 Generation At generation time, our system has a set of input data, a semantically organized template bank (collection of templates organized by CuId) and a model from training on the documents for a given domain. We first filter out those templates that contain a named entity tag not present in the input data. Then, we compute a score for each of the remaining templates from the feature values and the feature weights from the model. The template with the highest overall score is selected and filled with matching entity tags from the input data and 1409 appended to the generated text. Before generating the next sentence, we track those entities used in the initial sentence generation and decide to either remove those entities from the input data or keep the entity for one or more additional sentence generations. For example, in the biography discourses, the name of the person may occur only once in the input data, but it may be useful for creating good texts to have that person’s name available for subsequent generations. To illustrate in (3), if we remove James Smithton from the input data after the initial generation, an irrelevant sentence (d) is generated as the input data will only have one company after the removal of James Smithton and the model will only select a template with one company. If we keep James Smithton, then the generations in (a-b) are more cohesive. (3) Use more than once a. Mr. James Smithton was appointed CFO at Fordway Internation in April. b. Previously, Mr. Smithton was CFO of the Keyes Development Group. Use once and remove c. Mr. James Smithton was appointed CFO at Fordway Internation in April. d. Keyes Development Group is a venture capital firm. Deciding on what type of entities and how to remove them is different for each domain. For example, some entities are very unique to a text and should not be made available for subsequent generations as doing so would lead to unwanted redundancies (e.g., mentioning the name of current company in a biography discourse more than once as in (3)) and some entities are general and should be retained. Our system possesses the ability to monitor the data usage from historical data and we can set parameters (based on the distribution of entities) on the usage to ensure coherent generations for a given domain. Once the input data has been modified (i.e., an entity have been removed, replaced or retained), it serves as the new input data for the next sentence generation. This process repeats until reaching the minimum number of sentences for the domain (determined from the training corpus statistic) and then continues until all of the remaining input data is consumed (and not to exceed the predetermined maximum number of sentences, also determined from the training corpus statistic). 4 Evaluation and Discussion In this section, we first discuss the corpus data used to train and generate texts. Then, the results of both automatic and human evaluations of our system’s generations against the original and baseline texts are considered as a means of determining performance. For all experiments reported in this section, the baseline system selects the most frequent conceptual unit at the given position, chooses the most likely template for the conceptual unit, and fills the template with input data. 
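As a point of reference, that baseline can be sketched in a few lines. The sketch below assumes the corpus statistics are available as position-indexed and CuId-indexed counters and that the input data maps entity tags to surface strings; this framing is ours, not the authors' code.

```python
def baseline_sentence(position, cuid_freq_by_pos, template_freq_by_cuid, input_data):
    """Baseline: take the most frequent CuId for this position, its most
    likely template, and fill the template's tags from the input data.

    `cuid_freq_by_pos` and `template_freq_by_cuid` are assumed to be
    dicts of collections.Counter built from the training corpus, and
    `input_data` maps entity tags (e.g. "[person]") to surface strings.
    """
    cu_id, _ = cuid_freq_by_pos[position].most_common(1)[0]
    template, _ = template_freq_by_cuid[cu_id].most_common(1)[0]
    text = template
    for tag, value in input_data.items():
        text = text.replace(tag, value, 1)   # naive fill: first occurrence only
    return text
```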
The above process is repeated until the number of sentences is less than or equal to the average number of sentences for the given domain. 4.1 Data We ran our system on two different domains: corporate officer and director biographies and offshore oil rig weather reports from the SUMTIMEMETEO corpus ((Reiter et al., 2005)). The biography domain includes 1150 texts ranging from 3-17 sentences and the weather domain includes 1045 weather reports ranging from 1-6 sentences.2 We used a training-test(generation) split of 70/30. (4) provides generation comparisons for the system ( DocSys), baseline ( DocBase) and original ( DocOrig) randomly selected text snippets from each domain. The variability of the generated texts ranges from a close similarity to slightly shorter - not an uncommon (Belz and Reiter, 2006), but not necessarily detrimental, observation for NLG systems (van Deemter et al., 2005). (4) Weather DocOrig a. Another weak cold front will move ne to Cornwall by later Friday. Weather DocSys b. Another weak cold front will move ne to Cornwall during Friday. Weather DocBase c. Another weak cold front from ne through the Cornwall will remain slow moving. Bio DocOrig d. He previously served as Director of Sales Planning and Manager of Loan Center. Bio DocSys e. He previously served as Director of Sales in Loan Center of the Company. Bio DocBase 2The SUMTIME-METEO project is a common bench mark in NLG. However, we provide no comparison between our system and SUMTIME-METEO as our system utilized the generated forecasts from SUMTIME-METEO’s system as the historical data. We cannot compare with other statistical generation systems like (Belz, 2007) as they only focussed on the part of the forecasts the predicts wind characteristics whereas our system generates the complete forecasts. 1410 f. He previously served as Director of Sales of the Company. The DocSys and DocBase generations are largely grammatical and coherent on the surface with some variance, but there are graded semantic variations (e.g., Director of Sales Planning vs. Director of Sales (4g-h) and move ne to Cornwall vs. from ne through the Cornwall). Both automatic and human evaluations are required in NLG to determine the impact of these variances on the understandability of the texts in general (non-experts) and as they are representative of particular subject matter domains (experts). The following sections discuss the evaluation results. 4.2 Automatic Metrics We used BLEU–4 (Papineni et al., 2002), METEOR (v.1.3) (Denkowski and Lavie, 2011) to evaluate the texts at document level. Both BLEU–4 and METEOR originate from machine translation research. BLEU–4 measures the degree of 4-gram overlap between documents. METEOR uses a unigram weighted f–score less a penalty based on chunking dissimilarity. These metrics only evaluate the text on a document level but fail to identify “syntactic repetitiveness” across documents in a document collection. This is an important characteristic of a document collection to avoid banality. To address this issue, we propose a new automatic metric called syntactic variability. In order to compute this metric, each document should be represented as a sequence of templates by associating each sentence in the document with a template in the template bank. Syntactic variability is defined as the percentage of unique template sequences across all generated documents. It ranges between 0 and 1. 
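Given a mapping from each generated document to its sequence of template identifiers, the metric is straightforward to compute; the sketch below assumes that mapping has already been produced.

```python
def syntactic_variability(template_sequences):
    """Percentage of unique template sequences across generated documents.

    `template_sequences` holds one sequence of template ids per document,
    e.g. [(12, 7, 7), (12, 9, 4), (12, 7, 7)] -> 2/3.
    """
    if not template_sequences:
        return 0.0
    unique = {tuple(seq) for seq in template_sequences}
    return len(unique) / len(template_sequences)
```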
A higher value indicates that more documents in the collection are linguistically different from each other and a value closer to zero shows that most of documents have the similar language despite different input data.3 As indicated in Figure 2, the BLEU-4 scores are low for all DocSys and DocBase generations (as compared to DocOrig) for each domain. However, the METEOR scores, while low overall (ranging from .201-.437) are noticeably increased over BLEU-4 (which ranges from .199-.320). Given the nature of each metric, the results indicate that the generated and baseline texts have 3Of course, syntactic and semantic repetitiveness could be captured by syntactic variability, but only if this is the nature of the underlying historical data - financial texts tend to be fairly repetitive. Figure 2: Automatic Evaluations. very different surface realizations compared to the originals (low BLEU-4), but are still capturing the content of the originals (higher METEOR). Both BLEU–4 and METEOR measure the similarity of the generated text to the original text, but fail to penalize repetitiveness across texts, which is addressed by the syntactic variability metric. There is no statistically significant difference between DocSys and DocBase generations for METEOR and BLEU–4.4 However, there is a statistically significant difference in the syntactic variability metric for both domains (weather - χ2=137.16, d.f.=1, p<.0001; biography - χ2=96.641, d.f.=1, p<.0001) - the variability of the DocSys generations is greater than the DocBase generations, which shows that texts generated by our system are more variable than the baseline texts. The use of automatic metrics is a common evaluation method in NLG, but they must be reconciled against non–expert and expert level evaluations. 4.3 Non-Expert Human Evaluations Two sets of crowdsourced human evaluation tasks (run on CrowdFlower) were constructed to compare against the automatic metrics: (1) an understandability evaluation of the entire text on a threepoint scale: Fluent = no grammatical or informative barriers; Understandable = some grammatical or informative barriers; Disfluent = significant grammatical or informative barriers; and (2) a sentence–level preference between sentence pairs (e.g., “Do you prefer Sentence A (from DocOrig) or the corresponding Sentence B (from DocBase/DocSys)”). 4BLEU–4: weather - χ2=1.418, d.f.=1, p=.230; biography - χ2=0.311, d.f.=1, p=.354. METEOR: weather - χ2=1.016, d.f.=1, p=.313; biography - χ2=0.851, d.f.=1, p=.354. 1411 Over 100 native English speakers contributed, each one restricted to providing no more than 50 responses and only after they successfully answered 4 “gold data” questions correctly. We also omitted those evaluators with a disproportionately high response rate. No other data was collected on the contributors (although geographic data (country, region, city) and IP addresses were available). For the sentence–level preference task, the pair orderings were randomized to prevent click bias. For the text–understandability task, 40 documents were chosen at random from the DocOrig test set along with the corresponding 40 DocSys and DocBase generations (240 documents total/120 for each domain). 8 judgments per document were solicited from the crowd (1920 total judgments, 69.51 average agreement) and are summarized in Figures 3 and 4 (biography and weather respectively). 
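The χ2 values reported throughout this section compare counts of judgments of this kind between pairs of systems. As a rough illustration of how such a test can be run (not the authors' exact procedure; the 2x2 table of counts is invented), one might do the following:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows are the two systems being compared,
# columns are judgment outcomes (e.g. "preferred" vs. "not preferred").
table = [[180, 140],   # DocSys
         [150, 170]]   # DocBase

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, d.f. = {dof}, p = {p_value:.4f}")
```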
If the system is performing well and the ranking model is actually contributing to increased performance, the accepted trend should be that the DocOrig texts are more fluent and preferred compared to both the DocSys and DocBase systems. However, the differences between DocOrig and DocSys will not be significant, the differences between DocOrig and DocBase and DocSys and DocBase will be significantly different. Figure 3: Biography Text Evaluations. Focusing on fluency ratings, it is expected that the DocOrig generations will have the highest fluency (as they are human generated). Further, if the DocSys is performing well, it is expected that the fluency rating will be less than the DocOrig and higher than DocBase. Figure 3, which shows the biography text evaluations, demonstrates this acceptable distribution of performances. For the weather discourses, as evident from Figure 4, the acceptable trend holds between the DocSys and DocBase generations, and the DocSys generation fluency is actually slightly higher than DocOrig. This is possibly because the DocOrig texts are from a particular subject matter weather forecasts for offshore oil rigs in the U.K. - which may be difficult for people in general to understand. Nonetheless, the demonstrated trend is favorable to our system. In terms of significance, there are no statistically significant differences between the systems for weather (DocOrig vs. DocSys - χ2=.347, d.f.=1, p=.555; DocOrig vs. DocBase - χ2=.090, d.f.=1, p=.764; DocSys vs. DocBase - χ2=.790, d.f.=1, p=.373). While this is a good result for comparing DocOrig and DocSys generations, it is not for the other pairs. However, numerically, the trend is in the right direction despite not being able to demonstrate significance. For biography, the trend fits nicely both numerically and in terms of statistical significance (DocOrig vs. DocSys χ2=5.094, d.f.=1, p=.024; DocOrig vs. DocBase χ2=35.171, d.f.=1, p<.0001; DocSys vs. DocBase - χ2=14.000, d.f.=1, p<.0001). Figure 4: Weather Text Evaluations. For the sentence preference task, equivalent sentences across the 120 documents were chosen at random (80 sentences from biography and 74 sentences from weather). 8 judgments per comparison were solicited from the crowd (3758 total judgments, 75.87 average agreement) and are summarized in Figures 5 and 6 (biography and weather, respectively). Similar to the text–understandability task, an acceptable performance pattern should include the DocOrig texts being preferred to both DocSys and DocBase generations and the DocSys generation preferred to the DocBase. The closer the DocSys generation is to the DocOrig, the better DocSys is performing. The biography domain illus1412 Figure 5: Biography Sentence Evaluations. trates this scenario (Figure 5) where the results are similar to the text-understandability experiments. In contrast, for weather domain, sentences from DocBase system were preferred to our system’s (Figure 6). We looked at the cases where the preferences were in favor of DocBase. It appears that because of high syntactic variability, our system can produce quite complex sentences where as the non-experts seem to prefer shorter and simpler sentences because of the complexity of the text. In terms of significance, there are no statistically significant differences between the systems for weather (DocOrig vs. DocSys - χ2=6.48, d.f.=1, p=.011; DocOrig vs. DocBase - χ2=.720, d.f.=1, p=.396; DocSys vs. DocBase - χ2=.720, d.f.=1, p=.396). 
The trend is different compared to the fluency metric above in that the DocBase system is outperforming the DocOrig generations to an almost statistically significant difference - the remaining comparisons follow the trend. We believe that this is for similar reasons stated above - i.e., the generation may be a more digestible version of a technical document. More problematic is the results of the biography evaluations. Here there is a statistically significant difference between the DocSys and DocOrig and no statistically significant difference between the DocSys and DocBase generations (DocOrig vs. DocSys - χ2=76.880, d.f.=1, p<.0001; DocOrig vs. DocBase - χ2=38.720, d.f.=1, p<.0001; DocSys vs. DocBase - χ2=.720, d.f.=1, p=.396). Again, this distribution of preferences is numerically similar to the trend we would like to see, but the statistical significance indicates that there is some ground to make up. Expert evaluations are potentially informative for identifying specific shortcomings and how best to address them. Figure 6: Weather Sentence Evaluations. 4.4 Expert Human Evaluations We performed expert evaluations for the biography domain only as we do not have access to weather experts. The four biography reviewers are journalists who write short biographies for news archives. For the biography domain, evaluations of the texts were largely similar to the evaluations of the non-expert crowd (76.22 average agreement for the sentence–preference task and 72.95 for the text–understandability task). For example, the disfluent ratings were highest for the DocBase generations and lowest for the DocOrig generations. Also, the fluent ratings were highest for the DocOrig generations, and while the combined fluent and understandable are higher for DocSys as compared to DocBase, the DocBase generations had a 10% higher fluent score (58.22%) as compared to the DocSys fluent score (47.97%). Based on notes from the reviewers, the succinctness of the the DocBase generations are preferred in some ways as they are in keeping with certain editorial standards. This is further reflected in the sentence preferences being 70% in favor of the DocBase generations as compared to the DocSys generations (all other sentence comparisons were consistent with the non-expert crowd). These expert evaluations provide much needed clarity to the NLG process. Overall, our system is generating clearly acceptable texts. Further, there are enough parameters inherent in the system to tune to different domain expectations. This is an encouraging result considering that no experts were involved in the development of the system a key contrast to many other existing (especially rule-based) NLG systems. 1413 5 Conclusions and Future Work We have presented a hybrid (template-based and statistical), single–staged NLG system that generates natural sounding texts and is domain– adaptable. Our experiments with both experts and non–experts demonstrate that the system-generated texts are comparable to human– authored texts. The development time to adapt our system to new domains is small compared to other NLG systems; around a week to adapt the system to weather and biography domains. Most of the development time was spent on creating the domain-specific entity taggers for the weather domain. The development time would be reduced to hours if the historical data for a domain is readily available with the corresponding input data. The main limitation of our system is that it requires significant historical data. 
Our system does consolidate many traditional components (macroand micro-planning, lexical choice and aggregation),5 but the system cannot be applied to the domains with no historical data. The quality and the linguistic variability of the generated text is directly proportional to the amount of historical data available. We also presented a new automatic metric to evaluate generated texts at document collection level to identify boilerplate texts. This metric computes “syntactic repetitiveness” by counting the number of unique template sequences across the given document collection. Future work will focus on extending our framework by adding additional features to the model that could improve the quality of the generated text. For example, most NLG pipelines have a separate component responsible for referring expression generation (Krahmer and van Deemter, 2012). While we address the associated concern of data consumption in Section 3.5, we currently do not have any features that would handle referring expression generation. We believe that this is possible by identifying referring expressions in templates and adding features to the model to give higher scores to the templates having relevant referring expressions. We also would like to investigate using all the top-scored templates instead of the highest-scoring template. This would help achieve better syntactic-variability scores by producing more natural-sounding texts. 5Lexical choice and aggregation are “handled” insofar as their existence in the historical data. Acknowledgments This research is made possible by Thomson Reuters Global Resources (TRGR) with particular thanks to Peter Pircher, Jaclyn Sprtel and Ben Hachey for significant support. Thank you also to Khalid Al-Kofahi for encouragment, Leszek Michalak and Andrew Lipstein for expert evaluations and three anonymous reviewers for constructive feedback. References Gabor Angeli, Percy Liang, and Dan Klein. 2012. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Conference on Empirical Methods for Natural Language Processing (EMNLP 2010), pages 502–512. Regina Barzilay and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. In Proceedings of the 2005 Conference on Empirical Methods for Natural Language Processing (EMNLP 2005), pages 331–338. John Bateman and Michael Zock. 2003. Natural language generation. In R. Mitkov, editor, Oxford Handbook of Computational Linguistics, Research in Computational Semantics, pages 284–304. Oxford University Press, Oxford. Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In Proceedings of the European Association for Computational Linguistics (EACL’06), pages 313–320. Anja Belz. 2007. Probabilistic generation of weather forecast texts. In Proceedings of Human Language Technologies 2007: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT’07), pages 164–171. Johan Bos. 2008. Wide-coverage semantic analysis with Boxer. In J. Bos and R. Delmonte, editors, Semantics in Text Processing. STEP 2008 Conference Proceedings, volume 1 of Research in Computational Semantics, pages 277–286. College Publications. Nadjet Bouayad-Agha, Gerard Casamayor, and Leo Wanner. 2011. Content selection from an ontologybased knowledge base for the generation of football summaries. In Proceedings of the 13th European Workshop on Natural Language Generation (ENLG), pages 72–81. 
Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the EMNLP 2011 Workshop on Statistical Machine Translation, pages 85–91. 1414 Pablo A. Duboue and Kathleen R. McKeown. 2003. Statistical acquisition of content selection rules for natural language generation. In Proceedings of the 2003 Conference on Empirical Methods for Natural Language Processing (EMNLP 2003), pages 2003– 2007. Eduard H. Hovy. 1993. Automated discourse generation using discourse structure relations. Artificial Intelligence, 63:341–385. Blake Howald, Ravi Kondadadi, and Frank Schilder. 2013. Domain adaptable semantic clustering in statistical nlg. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013), pages 143–154. Association for Computational Linguistics, March. Thorsten Joachims. 2002. Learning to Classify Text Using Support Vector Machines. Kluwer. Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic; An Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and DRT. Kluwer, Dordrecht. Colin Kelly, Ann Copestake, and Nikiforos Karamanis. 2009. Investigating content selection for language generation using machine learning. In Proceedings of the 12th European Workshop on Natural Language Generation (ENLG), pages 130–137. Ioannis Konstas and Mirella Lapata. 2012. Conceptto-text generation via discriminative reranking. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 369– 378. Emiel Krahmer and Kees van Deemter. 2012. Computational generation of referring expression: A survey. Computational Linguistics, 38(1):173–218. Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (ACL’98), pages 704–710. Vladimir Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10:707–710. Wei Lu and Hwee Tou Ng. 2011. A probabilistic forest-to-string model for language generation from typed lambda calculus expressions. In Proceedings of the 2011 Conference on Empirical Methods for Natural Language Processing (EMNLP 2011), pages 1611–1622. Wei Lu, Hwee Tou Ng, and Wee Sun Lee. 2009. Natural language generation with tree conditional random fields. In Proceedings of the 2009 Conference on Empirical Methods for Natural Language Processing (EMNLP 2009), pages 400–409. Kathleen R. McKeown. 1985. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press. Johanna D. Moore and Cecile L. Paris. 1993. Planning text for advisory dialogues: Capturing intentional and rhetorical information. Computational Linguistics, 19(4):651–694. Kishore Papineni, Slim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL’02), pages 311–318. Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press. Ehud Reiter, Somayajulu Sripada, Jim Hunter, and Jin Yu. 2005. Choosing words in computer-generated weather forecasts. Artificial Intelligence, 167:137– 169. Jacques Robin and Kathy McKeown. 1996. Exmpirically designing and evaluating a new revision-based model for summary generation. 
Artificial Intelligence, 85(1-2). Somayajulu Sripada, Ehud Reiter, Jim Hunter, and Jin Yu. 2001. A two-stage model for content determination. In Proceedings of the 8th European Workshop on Natural Language Generation (ENLG), pages 1–8. Kees van Deemter, Mari¨et Theune, and Emiel Krahmer. 2005. Real vs. template-based natural language generation: a false opposition? Computational Linguistics, 31(1):15–24. Ian Witten and Eibe Frank. 2005. Data Mining: Practical Machine Learning Techniques with Java Implementation (2nd Ed.). Morgan Kaufmann, San Francisco, CA. 1415
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1416–1424, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Models of Translation Competitions Mark Hopkins and Jonathan May SDL Research 6060 Center Drive, Suite 150 Los Angeles, CA 90045 {mhopkins,jmay}@sdl.com Abstract What do we want to learn from a translation competition and how do we learn it with confidence? We argue that a disproportionate focus on ranking competition participants has led to lots of different rankings, but little insight about which rankings we should trust. In response, we provide the first framework that allows an empirical comparison of different analyses of competition results. We then use this framework to compare several analytical models on data from the Workshop on Machine Translation (WMT). 1 The WMT Translation Competition Every year, the Workshop on Machine Translation (WMT) conducts a competition between machine translation systems. The WMT organizers invite research groups to submit translation systems in eight different tracks: Czech to/from English, French to/from English, German to/from English, and Spanish to/from English. For each track, the organizers also assemble a panel of judges, typically machine translation specialists.1 The role of a judge is to repeatedly rank five different translations of the same source text. Ties are permitted. In Table 1, we show an example2 where a judge (we’ll call him “jdoe”) has ranked five translations of the French sentence “Il ne va pas.” Each such elicitation encodes ten pairwise comparisons, as shown in Table 2. For each competition track, WMT typically elicits between 5000 and 20000 comparisons. Once the elicitation process is complete, WMT faces a large database of comparisons and a question that must be answered: whose system is the best? 1Although in recent competitions, some of the judging has also been crowdsourced (Callison-Burch et al., 2010). 2The example does not use actual system output. rank system translation 1 bbn “He does not go.” 2 (tie) uedin “He goes not.” 2 (tie) jhu “He did not go.” 4 cmu “He go not.” 5 kit “He not go.” Table 1: WMT elicits preferences by asking judges to simultaneously rank five translations, with ties permitted. In this (fictional) example, the source sentence is the French “Il ne va pas.” source text sys1 sys2 judge preference “Il ne va pas.” bbn cmu jdoe 1 “Il ne va pas.” bbn jhu jdoe 1 “Il ne va pas.” bbn kit jdoe 1 “Il ne va pas.” bbn uedin jdoe 1 “Il ne va pas.” cmu jhu jdoe 2 “Il ne va pas.” cmu kit jdoe 1 “Il ne va pas.” cmu uedin jdoe 2 “Il ne va pas.” jhu kit jdoe 1 “Il ne va pas.” jhu uedin jdoe 0 “Il ne va pas.” kit uedin jdoe 2 Table 2: Pairwise comparisons encoded by Table 1. A preference of 0 means neither translation was preferred. Otherwise the preference specifies the preferred system. 2 A Ranking Problem For several years, WMT used the following heuristic for ranking the translation systems: ORIGWMT(s) = win(s) + tie(s) win(s) + tie(s) + loss(s) For system s, win(s) is the number of pairwise comparisons in which s was preferred, loss(s) is the number of comparisons in which s was dispreferred, and tie(s) is the number of comparisons in which s participated but neither system was preferred. Recently, (Bojar et al., 2011) questioned the adequacy of this heuristic through the following ar1416 gument. Consider a competition with systems A and B. 
Suppose that the systems are different but equally good, such that one third of the time A is judged better than B, one third of the time B is judged better than A, and one third of the time they are judged to be equal. The expected values of ORIGWMT(A) and ORIGWMT(B) are both 2/3, so the heuristic accurately judges the systems to be equivalently good. Suppose however that we had duplicated B and had submitted it to the competition a second time as system C. Since B and C produce identical translations, they should always tie with one another. The expected value of ORIGWMT(A) would not change, but the expected value of ORIGWMT(B) would increase to 5/6, buoyed by its ties with system C. This vulnerability prompted (Bojar et al., 2011) to offer the following revision: BOJAR(s) = win(s) win(s) + loss(s) The following year, it was BOJAR’s turn to be criticized, this time by (Lopez, 2012): Superficially, this appears to be an improvement....couldn’t a system still be penalized simply by being compared to [good systems] more frequently than its competitors? On the other hand, couldn’t a system be rewarded simply by being compared against a bad system more frequently than its competitors? Lopez’s concern, while reasonable, is less obviously damning than (Bojar et al., 2011)’s criticism of ORIGWMT. It depends on whether the collected set of comparisons is small enough or biased enough to make the variance in competition significant. While this hypothesis is plausible, Lopez makes no attempt to verify it. Instead, he offers a ranking heuristic of his own, based on a Minimum Feedback Arc solver. The proliferation of ranking heuristics continued from there. The WMT 2012 organizers (Callison-Burch et al., 2012) took Lopez’s ranking scheme and provided a variant called Most Probable Ranking. Then, noting some potential pitfalls with that, they created two more, called Monte Carlo Playoffs and Expected Wins. While one could raise philosophical objections about each of these, where would it end? Ultimately, the WMT 2012 findings presented five different rankings for the English-German competition track, with no guidance about which ranking we should pay attention to. How can we know whether one ranking is better than other? Or is this even the right question to ask? 3 A Problem with Rankings Suppose four systems participate in a translation competition. Three of these systems are extremely close in quality. We’ll call these close1, close2, and close3. Nevertheless, close1 is very slightly better3 than close2, and close2 is very slightly better than close3. The fourth system, called terrific, is a really terrific system that far exceeds the other three. Now which is the better ranking? terrific, close3, close1, close2 (1) close1, terrific, close2, close3 (2) Spearman’s rho4 would favor the second ranking, since it is a less disruptive permutation of the gold ranking. But intuition favors the first. While its mistakes are minor, the second ranking makes the hard-to-forgive mistake of placing close1 ahead of the terrific system. The problem is not with Spearman’s rho. The problem is the disconnnect between the knowledge that we want a ranking to reflect and the knowledge that a ranking actually contains. Without this additional knowledge, we cannot determine whether one ranking is better than another, even if we know the gold ranking. We need to determine what information they lack, and define more rigorously what we hope to learn from a translation competition. 
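For concreteness, the two heuristics discussed in Section 2 are easy to state in code, and doing so makes the duplicate-system argument above simple to verify on toy data. The sketch below uses our own representation of the comparisons and is only illustrative.

```python
from collections import Counter

def heuristic_scores(comparisons):
    """ORIGWMT and BOJAR scores from a list of (sys1, sys2, preference)
    triples, where preference is 0 for a tie, 1 if sys1 won, 2 if sys2 won."""
    win, loss, tie = Counter(), Counter(), Counter()
    for s1, s2, pref in comparisons:
        if pref == 0:
            tie[s1] += 1
            tie[s2] += 1
        else:
            winner, loser = (s1, s2) if pref == 1 else (s2, s1)
            win[winner] += 1
            loss[loser] += 1
    systems = set(win) | set(loss) | set(tie)
    orig = {s: (win[s] + tie[s]) / max(win[s] + tie[s] + loss[s], 1)
            for s in systems}
    bojar = {s: win[s] / max(win[s] + loss[s], 1)   # guard for only-tie systems
             for s in systems}
    return orig, bojar
```

Running it on simulated comparisons in which system B is duplicated as C reproduces the effect described above: ORIGWMT(B) is inflated by the guaranteed ties with C, while BOJAR(B) is unchanged in expectation.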
4 From Rankings to Relative Ability Ostensibly the purpose of a translation competition is to determine the relative ability of a set of translation systems. Let S be the space of all translation systems. Hereafter, we will refer to S as the space of students. We choose this term to evoke the metaphor of a translation competition as a standardized test, which shares the same goal: to assess the relative abilities of a set of participants. But what exactly do we mean by “ability”? Before formally defining this term, first recognize that it means little without context, namely: 3What does “better” mean? We’ll return to this question. 4Or Pearson’s correlation coefficient. 1417 1. What kind of source text do we want the systems to translate well? Say system A is great at translating travel-related documents, but terrible at translating newswire. Meanwhile, system B is pretty good at both. The question “which system is better?” requires us to state how much we care about travel versus newswire documents – otherwise the question is underspecified. 2. Who are we trying to impress? While it’s tempting to think that translation quality is a universal notion, the 50-60% interannotator agreement in WMT evaluations (CallisonBurch et al., 2012) suggests otherwise. It’s also easy to imagine reasons why one group of judges might have different priorities than another. Think a Fortune 500 company versus web forum users. Lawyers versus laymen. Non-native versus native speakers. Posteditors versus Google Translate users. Different groups have different uses for translation, and therefore different definitions of what “better” means. With this in mind, let’s define some additional elements of a translation competition. Let X be the space of all possible segments of source text, J be the space of all possible judges, and Π = {0, 1, 2} be the space of pairwise preferences.5 We assume all spaces are countable. Unless stated otherwise, variables s1 and s2 represent students from S, variable x represents a segment from X, variable j represents a judge from J , and variable π represents a preference from Π. Moreover, define the negation ˆπ of preference π such that ˆπ = 2 (if π = 1), ˆπ = 1 (if π = 2), and ˆπ = 0 (if π = 0). Now assume a joint distribution P(s1, s2, x, j, π) specifying the probability that we ask judge j to evaluate students s1 and s2’s respective translations of source text x, and that judge j’s preference is π. We will further assume that the choice of student pair, source text, and judge are marginally independent of one another. In other words: P(s1, s2, x, j, π) = P(π|s1, s2, x, j) · P(x|s1, s2, j) ·P(j|s1, s2) · P(s1, s2) = P(π|s1, s2, x, j) · P(x) · P(j) · P(s1, s2) = PX (x) · PJ (j) · P(s1, s2) · P(π|s1, s2, x, j) 5As a reminder, 0 indicates no preference. It will be useful to reserve notation PX and PJ for the marginal distributions over source text and judges. We can marginalize over the source segments and judges to obtain a useful quantity: P(π|s1, s2) = X x∈X X j∈J PX (x) · PJ (j) · P(π|s1, s2, x, j) We refer to this as the ⟨PX , PJ ⟩-relative ability of students s1 and s2. By using different marginal distributions PX , we can specify what kinds of source text interest us (for instance, PX could focus most of its probability mass on German tweets). 
Similarly, by using different marginal distributions PJ , we can specify what judges we want to impress (for instance, PJ could focus all of its mass on one important corporate customer or evenly among all fluent bilingual speakers of a language pair). With this machinery, we can express the purpose of a translation competition more clearly: to estimate the ⟨PX , PJ ⟩-relative ability of a set of students. In the case of WMT, PJ presumably6 defines a space of competent source-totarget bilingual speakers, while PX defines a space of newswire documents. We’ll refer to an estimate of P(π|s1, s2) as a preference model. In other words, a preference model is a distribution Q(π|s1, s2). Given a set of pairwise comparisons (e.g., Table 2), the challenge is to estimate a preference model Q(π|s1, s2) such that Q is “close” to P. For measuring distributional proximity, a natural choice is KL-divergence (Kullback and Leibler, 1951), but we cannot use it here because P is unknown. Fortunately, if we have i.i.d. data drawn from P, then we can do the next best thing and compute the perplexity of preference model Q on this heldout test data. Let D be a sequence of triples ⟨s1, s2, π⟩ where the preferences π are i.i.d. samples from P(π|s1, s2). The perplexity of preference model Q on test data D is: perplexity(Q|D) = 2−P ⟨s1,s2,π⟩∈D 1 |D| log2 Q(π|s1,s2) How do we obtain such a test set from competition data? Recall that a WMT competition produces pairwise comparisons like those in Table 2. 6One could argue that it specifies a space of machine translation specialists, but likely these individuals are thought to be a representative sample of a broader community. 1418 Let C be the set of comparisons ⟨s1, s2, x, j, π⟩ obtained from a translation competition. Competition data C is not necessarily7 sampled i.i.d. from P(s1, s2, x, j, π) because we may intentionally8 bias data collection towards certain students, judges or source text. Also, because WMT elicits its data in batches (see Table 1), every segment x of source text appears in at least ten comparisons. To create an appropriately-sized test set that closely resembles i.i.d. data, we isolate the subset C′ of comparisons whose source text appears in at most k comparisons, where k is the smallest positive integer such that |C′| >= 2000. We then create the test set D from C′: D = {⟨s1, s2, π⟩|⟨s1, s2, x, j, π⟩∈C′} We reserve the remaining comparisons for training preference models. Table 3 shows the resulting dataset sizes for each competition track. Unlike with raw rankings, the claim that one preference model is better than another has testable implications. Given two competing models, we can train them on the same comparisons, and compare their perplexities on the test set. This gives us a quantitative9 answer to the question of which is the better model. We can then publish a system ranking based on the most trustworthy preference model. 5 Baselines Let’s begin then, and create some simple preference models to serve as baselines. 5.1 Uniform The simplest preference model is a uniform distribution over preferences, for any choice of students s1, s2: Q(π|s1, s2) = 1 3 ∀π ∈Π This will be our only model that does not require training data, and its perplexity on any test set will be 3 (i.e. equal to number of possible preferences). 5.2 Adjusted Uniform Now suppose we have a set C of comparisons available for training. Let Cπ ⊆C denote the subset of comparisons with preference π, and let 7In WMT, it certainly is not. 
8To collect judge agreement statistics, for instance. 9As opposed to philosophical. C(s1, s2) denote the subset comparing students s1 and s2. Perhaps the simplest thing we can do with the training data is to estimate the probability of ties (i.e. preference 0). We can then distribute the remaining probability mass uniformly among the other two preferences: Q(π|s1, s2) =          |C0| |C| if π = 0 1 −|C0| |C| 2 otherwise 6 Simple Bayesian Models 6.1 Independent Pairs Another simple model is the direct estimation of each relative ability P(π|s1, s2) independently. In other words, for each pair of students s1 and s2, we estimate a separate preference distribution. The maximum likelihood estimate of each distribution would be: Q(π|s1, s2) = |Cπ(s1, s2)| + |Cˆπ(s2, s1)| |C(s1, s2)| + |C(s2, s1)| However the maximum likelihood estimate would test poorly, since any zero probability estimates for test set preferences would result in infinite perplexity. To make this model practical, we assume a symmetric Dirichlet prior with strength α for each preference distribution. This gives us the following Bayesian estimate: Q(π|s1, s2) = α + |Cπ(s1, s2)| + |Cˆπ(s2, s1)| 3α + |C(s1, s2)| + |C(s2, s1)| We call this the Independent Pairs preference model. 6.2 Independent Students The Independent Pairs model makes a strong independence assumption. It assumes that even if we know that student A is much better than student B, and that student B is much better than student C, we can infer nothing about how student A will fare versus student C. Instead of directly estimating the relative ability P(π|s1, s2) of students s1 and s2, we could instead try to estimate the universal ability P(π|s1) = P s2∈S P(π|s1, s2) · P(s2|s1) of each individual student s1 and then try to reconstruct the relative abilities from these estimates. For the same reasons as before, we assume a symmetric Dirichlet prior with strength α for each 1419 preference distribution, which gives us the following Bayesian estimate: Q(π|s1) = α + P s2∈S |Cπ(s1, s2)| + |Cˆπ(s2, s1)| 3α + P s2∈S |C(s1, s2)| + |C(s2, s1)| The estimates Q(π|s1) do not yet constitute a preference model. A downside of this approach is that there is no principled way to reconstruct a preference model from the universal ability estimates. We experiment with three ad-hoc reconstructions. The asymmetric reconstruction simply ignores any information we have about student s2: Q(π|s1, s2) = Q(π|s1) The arithmetic and geometric reconstructions compute an arithmetic/geometric average of the two universal abilities: Q(π|s1, s2) = Q(π|s1) + Q(ˆπ|s2) 2 Q(π|s1, s2) = [Q(π|s1) ∗Q(ˆπ|s2)] 1 2 We respectively call these the (Asymmetric/Arithmetic/Geometric) Independent Students preference models. Notice the similarities between the universal ability estimates Q(π|s1) and the BOJAR ranking heuristic. These three models are our attempt to render the BOJAR heuristic as preference models. 7 Item-Response Theoretic (IRT) Models Let’s revisit (Lopez, 2012)’s objection to the BOJAR ranking heuristic: “...couldn’t a system still be penalized simply by being compared to [good systems] more frequently than its competitors?” The official WMT 2012 findings (Callison-Burch et al., 2012) echoes this concern in justifying the exclusion of reference translations from the 2012 competition: [W]orkers have a very clear preference for reference translations, so including them unduly penalized systems that, through (un)luck of the draw, were pitted against the references more often. 
Presuming the students are paired uniformly at random, this issue diminishes as more comparisons are elicited. But preference elicitation is expensive, so it makes sense to assess the relative ability of the students with as few elicitations as possible. Still, WMT 2012’s decision to eliminate references entirely is a bit of a draconian measure, a treatment of the symptom rather than the (perceived) disease. If our models cannot function in the presence of training data variation, then we should change the models, not the data. A model that only works when the students are all about the same level is not one we should rely on. We experiment with a simple model that relaxes some independence assumptions made by previous models, in order to allow training data variation (e.g. who a student has been paired with) to influence the estimation of the student abilities. Figure 1(left) shows plate notation (Koller and Friedman, 2009) for the model’s independence structure. First, each student’s ability distribution is drawn from a common prior distribution. Then a number of translation items are generated. Each item is authored by a student and has a quality drawn from the student’s ability distribution. Then a number of pairwise comparisons are generated. Each comparison has two options, each a translation item. The quality of each item is observed by a judge (possibly noisily) and then the judge states a preference by comparing the two observations. We investigate two parameterizations of this model: Gaussian and categorical. Figure 1(right) shows an example of the Gaussian parameterization. The student ability distributions are Gaussians with a known standard deviation σa, drawn from a zero-mean Gaussian prior with known standard deviation σ0. In the example, we show the ability distributions for students 6 (an aboveaverage student, whose mean is 0.4) and 14 (a poor student, whose mean is -0.6). We also show an item authored by each student. Item 43 has a somewhat low quality of -0.3 (drawn from student 14’s ability distribution), while item 205 is not student 6’s best work (he produces a mean quality of 0.4), but still has a decent quality at 0.2. Comparison 1 pits these items against one another. A judge draws noise from a zero-mean Gaussian with known standard deviation σobs, then adds this to the item’s actual quality to get an observed quality. For the first option (item 43), the judge draws a noise of -0.12 to observe a quality of -0.42 (worse than it actually is). For the second option (item 205), the judge draws a noise of 0.15 to observe a quality of 0.35 (better than it actually is). Finally, the judge compares the two observed qualities. If the absolute difference is lower than his decision 1420 student.6.ability Gauss(0.4, σa) item.43.author 14 item.43.quality -0.3 comp.1.opt1 43 comp.1.opt1.obs -0.42 comp.1.pref 2 comp.1.opt2 205 comp.1.opt2.obs 0.35 student.prior Gauss(0.0, σ0) decision.radius 0.5 obs.parameters Gauss(0.0, σobs) item.205.author 6 item.205.quality 0.2 student.14.ability Gauss(-0.6, σa) student.s.ability item.i.author item.i.quality comp.c.opt1 comp.c.opt1.obs comp.c.pref comp.c.opt2 comp.c.opt2.obs S I C student.prior decision.radius obs.parameters Figure 1: Plate notation (left) showing the independence structure of the IRT Models. Example instantiated subnetwork (right) for the Gaussian parameterization. Shaded rectangles are hyperparameters. Shaded ellipses are variables observable from a set of comparisons. 
radius (which here is 0.5), then he states no preference (i.e. a preference of 0). Otherwise he prefers the item with the higher observed quality. The categorical parameterization is similar to the Gaussian parameterization, with the following differences. Item quality is not continuous, but rather a member of the discrete set {1, 2, ..., Λ}. The student ability distributions are categorical distributions over {1, 2, ..., Λ}, and the student ability prior is a symmetric Dirichlet with strength αa. Finally, the observed quality is the item quality λ plus an integer-valued noise ν ∈{1 − λ, ..., Λ −λ}. Noise ν is drawn from a discretized zero-mean Gaussian with standard deviation σobs. Specifically, Pr(ν) is proportional to the value of the probability density function of the zero-mean Gaussian N(0, σobs). We estimated the model parameters with Gibbs sampling (Geman and Geman, 1984). We found that Gibbs sampling converged quickly and consistently10 for both parameterizations. Given the parameter estimates, we obtain a preference model Q(π|s1, s2) through the inference query: Pr(comp.c′.pref = π | item.i′.author = s1, item.i′′.author = s2, comp.c′.opt1 = i′, comp.c′.opt2 = i′′) 10We ran 200 iterations with a burn-in of 50. where c′, i′, i′′ are new comparison and item ids that do not appear in the training data. We call these models Item-Response Theoretic (IRT) models, to acknowledge their roots in the psychometrics (Thurstone, 1927; Bradley and Terry, 1952; Luce, 1959) and item-response theory (Hambleton, 1991; van der Linden and Hambleton, 1996; Baker, 2001) literature. Itemresponse theory is the basis of modern testing theory and drives adaptive standardized tests like the Graduate Record Exam (GRE). In particular, the Gaussian parameterization of our IRT models strongly resembles11 the Thurstone (Thurstone, 1927) and Bradley-Terry-Luce (Bradley and Terry, 1952; Luce, 1959) models of paired comparison and the 1PL normal-ogive and Rasch (Rasch, 1960) models of student testing. From the testing perspective, we can view each comparison as two students simultaneously posing a test question to the other: “Give me a translation of the source text which is better than mine.” The students can answer the question correctly, incorrectly, or they can provide a translation of analogous quality. An extra dimension of our models is judge noise, not a factor when modeling multiple-choice tests, for which the right answer is not subject to opinion. 11These models are not traditionally expressed using graphical models, although it is not unprecedented (Mislevy and Almond, 1997; Mislevy et al., 1999). 1421 wmt10 wmt11 wmt12 lp train test train test train test ce 3166 2209 1706 3216 5969 6806 fe 5918 2376 2556 4430 7982 5840 ge 7422 3002 3708 5371 8106 6032 se 8411 2896 1968 3684 3910 7376 ec 10490 3048 8859 9016 13770 9112 ef 5720 2242 3328 5758 7841 7508 eg 10852 2842 5964 7032 10210 7191 es 2962 2212 4768 6362 5664 8928 Table 3: Dataset sizes for each competition track (number of comparisons). Figure 2: WMT10 model perplexities. The perplexity of the uniform preference model is 3.0 for all training sizes. 8 Experiments We organized the competition data as described at the end of Section 4. To compare the preference models, we did the following: • Randomly chose a subset of k comparisons from the training set, for k ∈ {100, 200, 400, 800, 1600, 3200}.12 • Trained the preference model on these comparisons. • Evaluated the perplexity of the trained model on the test preferences, as described in Section 4. 
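A minimal sketch of this protocol, instantiated for the Independent Pairs model of Section 6.1 (Dirichlet-smoothed counts) and the perplexity measure of Section 4, might look as follows; the data handling is simplified and the function names are ours.

```python
import math
import random
from collections import Counter

NEG = {0: 0, 1: 2, 2: 1}   # preference negation: swap 1 and 2, keep ties

def train_independent_pairs(comparisons, alpha=1.0):
    """Return Q(pi | s1, s2) as Dirichlet-smoothed symmetric counts."""
    counts, totals = Counter(), Counter()
    for s1, s2, pref in comparisons:
        counts[(s1, s2, pref)] += 1
        counts[(s2, s1, NEG[pref])] += 1   # each comparison counts in both directions
        totals[(s1, s2)] += 1
        totals[(s2, s1)] += 1
    def q(pref, s1, s2):
        return (alpha + counts[(s1, s2, pref)]) / (3 * alpha + totals[(s1, s2)])
    return q

def perplexity(q, test_data):
    """2 ** (-average log2 Q(pi | s1, s2)) over held-out preference triples."""
    log_sum = sum(math.log2(q(pref, s1, s2)) for s1, s2, pref in test_data)
    return 2 ** (-log_sum / len(test_data))

def learning_curve(train, test, sizes=(100, 200, 400, 800, 1600, 3200)):
    for k in sizes:
        sample = random.sample(train, min(k, len(train)))
        print(k, perplexity(train_independent_pairs(sample), test))
```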
For each model and training size, we averaged the perplexities from 5 trials of each competition track. We then plotted average perplexity as a function of training size. These graphs are shown 12If k was greater than the total number of training comparisons, then we took the entire set. Figure 3: WMT11 model perplexities. Figure 4: WMT12 model perplexities. in Figure 2 (WMT10)13, and Figure 4 (WMT12). For WMT10 and WMT11, the best models were the IRT models, with the Gaussian parameterization converging the most rapidly and reaching the lowest perplexity. For WMT12, in which reference translations were excluded from the competition, four models were nearly indistinguishable: the two IRT models and the two averaged Independent Student models. This somewhat validates the organizers’ decision to exclude the references, particularly given WMT’s use of the BOJAR ranking heuristic (the nucleus of the Independent Student models) for its official rankings. 13Results for WMT10 exclude the German-English and English-German tracks, since we used these to tune our model hyperparameters. These were set as follows. The Dirichlet strength for each baseline was 1. For IRT-Gaussian: σ0 = 1.0, σobs = 1.0, σa = 0.5, and the decision radius was 0.4. For IRT-Categorical: Λ = 8, σobs = 1.0, αa = 0.5, and the decision radius was 0. 1422 Figure 6: English-Czech WMT11 results (average of 5 trainings on 1600 comparisons). Error bars (left) indicate one stddev of the estimated ability means. In the heatmap (right), cell (s1, s2) is darker if preference model Q(π|s1, s2) skews in favor of student s1, lighter if it skews in favor of student s2. Figure 5: WMT10 model perplexities (crowdsourced versus expert training). The IRT models proved the most robust at handling judge noise. We repeated the WMT10 experiment using the same test sets, but using the unfiltered crowdsourced comparisons (rather than “expert”14 comparisons) for training. Figure 5 shows the results. Whereas the crowdsourced noise considerably degraded the Geometric Independent Students model, the IRT models were remarkably robust. IRT-Gaussian in particular came close to replicating the performance of Geometric Independent Students trained on the much cleaner expert data. This is rather impressive, since the crowdsourced judges agree only 46.6% of the time, compared to a 65.8% agreement rate among 14I.e., machine translation specialists. expert judges (Callison-Burch et al., 2010). Another nice property of the IRT models is that they explicitly model student ability, so they yield a natural ranking. For training size 1600 of the WMT11 English-Czech track, Figure 6 (left) shows the mean student abilities learned by the IRT-Gaussian model. The error bars show one standard deviation of the ability means (recall that we performed 5 trials, each with a random training subset of size 1600). These results provide further insight into a case analyzed by (Lopez, 2012), which raised concern about the relative ordering of online-B, cu-bojar, and cu-marecek. According to IRT-Gaussian’s analysis of the data, these three students are so close in ability that any ordering is essentially arbitrary. Short of a full ranking, the analysis does suggest four strata. Viewing one of IRT-Gaussian’s induced preference models as a heatmap15 (Figure 6, right), four bands are discernable. First, the reference sentences are clearly the darkest (best). Next come students 2-7, followed by the slightly lighter (weaker) students 810, followed by the lightest (weakest) student 11. 
9 Conclusion WMT has faced a crisis of confidence lately, with researchers raising (real and conjectured) issues with its analytical methodology. In this paper, we showed how WMT can restore confidence in 15In the heatmap, cell (s1, s2) is darker if preference model Q(π|s1, s2) skews in favor of student s1, lighter if it skews in favor of student s2. 1423 its conclusions – by shifting the focus from rankings to relative ability. Estimates of relative ability (the expected head-to-head performance of system pairs over a probability space of judges and source text) can be empirically compared, granting substance to previously nebulous questions like: 1. Is my analysis better than your analysis? Rather than the current anecdotal approach to comparing competition analyses (e.g. presenting example rankings that seem somehow wrong), we can empirically compare the predictive power of the models on test data. 2. How much of an impact does judge noise have on my conclusions? We showed that judge noise can have a significant impact on the quality of our conclusions, if we use the wrong models. However, the IRTGaussian appears to be quite noise-tolerant, giving similar-quality conclusions on both expert and crowdsourced comparisons. 3. How many comparisons should I elicit? Many of our preference models (including IRT-Gaussian and Geometric Independent Students) are close to convergence at around 1000 comparisons. This suggests that we can elicit far fewer comparisons and still derive confident conclusions. This is the first time a concrete answer to this question has been provided. References F.B. Baker. 2001. The basics of item response theory. ERIC. Ondej Bojar, Miloˇs Ercegovˇcevi´c, Martin Popel, and Omar Zaidan. 2011. A grain of salt for the wmt manual evaluation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 1–11, Edinburgh, Scotland, July. Association for Computational Linguistics. Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324– 345. C. Callison-Burch, P. Koehn, C. Monz, K. Peterson, M. Przybocki, and O.F. Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 17– 53. Association for Computational Linguistics. Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation. S. Geman and D. Geman. 1984. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721–741. R.K. Hambleton. 1991. Fundamentals of item response theory, volume 2. Sage Publications, Incorporated. D. Koller and N. Friedman. 2009. Probabilistic graphical models: principles and techniques. MIT press. S. Kullback and R.A. Leibler. 1951. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86. Adam Lopez. 2012. Putting human assessments of machine translation systems in order. In Proceedings of WMT. R. Ducan Luce. 1959. Individual Choice Behavior a Theoretical Analysis. John Wiley and sons. R.J. Mislevy and R.G. Almond. 1997. Graphical models and computerized adaptive testing. UCLA CSE Technical Report 434. R.J. Mislevy, R.G. 
Almond, D. Yan, and L.S. Steinberg. 1999. Bayes nets in educational assessment: Where the numbers come from. In Proceedings of the fifteenth conference on uncertainty in artificial intelligence, pages 437–446. Morgan Kaufmann Publishers Inc. G. Rasch. 1960. Studies in mathematical psychology: I. probabilistic models for some intelligence and attainment tests. Louis L Thurstone. 1927. A law of comparative judgment. Psychological review, 34(4):273–286. W.J. van der Linden and R.K. Hambleton. 1996. Handbook of modern item response theory. Springer. 1424
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 135–144, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy Francesco Sartorio Department of Information Engineering University of Padua, Italy [email protected] Giorgio Satta Department of Information Engineering University of Padua, Italy [email protected] Joakim Nivre Department of Linguistics and Philology Uppsala University, Sweden [email protected] Abstract We present a novel transition-based, greedy dependency parser which implements a flexible mix of bottom-up and top-down strategies. The new strategy allows the parser to postpone difficult decisions until the relevant information becomes available. The novel parser has a ∼12% error reduction in unlabeled attachment score over an arc-eager parser, with a slow-down factor of 2.8. 1 Introduction Dependency-based methods for syntactic parsing have become increasingly popular during the last decade or so. This development is probably due to many factors, such as the increased availability of dependency treebanks and the perceived usefulness of dependency structures as an interface to downstream applications, but a very important reason is also the high efficiency offered by dependency parsers, enabling web-scale parsing with high throughput. The most efficient parsers are greedy transition-based parsers, which only explore a single derivation for each input and relies on a locally trained classifier for predicting the next parser action given a compact representation of the derivation history, as pioneered by Yamada and Matsumoto (2003), Nivre (2003), Attardi (2006), and others. However, while these parsers are capable of processing tens of thousands of tokens per second with the right choice of classifiers, they are also known to perform slightly below the state-ofthe-art because of search errors and subsequent error propagation (McDonald and Nivre, 2007), and recent research on transition-based dependency parsing has therefore explored different ways of improving their accuracy. The most common approach is to use beam search instead of greedy decoding, in combination with a globally trained model that tries to minimize the loss over the entire sentence instead of a locally trained classifier that tries to maximize the accuracy of single decisions (given no previous errors), as first proposed by Zhang and Clark (2008). With these methods, transition-based parsers have reached state-of-the-art accuracy for a number of languages (Zhang and Nivre, 2011; Bohnet and Nivre, 2012). However, the drawback with this approach is that parsing speed is proportional to the size of the beam, which means that the most accurate transition-based parsers are not nearly as fast as the original greedy transition-based parsers. Another line of research tries to retain the efficiency of greedy classifier-based parsing by instead improving the way in which classifiers are learned from data. While the classical approach limits training data to parser states that result from oracle predictions (derived from a treebank), these novel approaches allow the classifier to explore states that result from its own (sometimes erroneous) predictions (Choi and Palmer, 2011; Goldberg and Nivre, 2012). 
In this paper, we explore an orthogonal approach to improving the accuracy of transition-based parsers, without sacrificing their advantage in efficiency, by introducing a new type of transition system. While all previous transition systems assume a static parsing strategy with respect to top-down and bottom-up processing, our new system allows a dynamic strategy for ordering parsing decisions. This has the advantage that the parser can postpone difficult decisions until the relevant information becomes available, in a way that is not possible in existing transition systems. A second advantage of dynamic parsing is that we can extend the feature inventory of previous systems. Our experiments show that these advantages lead to significant improvements in parsing accuracy, compared to a baseline parser that uses the arc-eager transition system of Nivre (2003), which is one of the most 135 widely used static transition systems. 2 Static vs. Dynamic Parsing The notions of bottom-up and top-down parsing strategies do not have a general mathematical definition; they are instead specified, often only informally, for individual families of grammar formalisms. In the context of dependency parsing, a parsing strategy is called purely bottom-up if every dependency h →d is constructed only after all dependencies of the form d →i have been constructed. Here h →d denotes a dependency with h the head node and d the dependent node. In contrast, a parsing strategy is called purely top-down if h →d is constructed before any dependency of the form d →i. If we consider transition-based dependency parsing (Nivre, 2008), the purely bottom-up strategy is implemented by the arc-standard model of Nivre (2004). After building a dependency h →d, this model immediately removes from its stack node d, preventing further attachment of dependents to this node. A second popular parser, the arc-eager model of Nivre (2003), instead adopts a mixed strategy. In this model, a dependency h →d is constructed using a purely bottom-up strategy if it represents a left-arc, that is, if the dependent d is placed to the left of the head h in the input string. In contrast, if h →d represents a right-arc (defined symmetrically), then this dependency is constructed before any right-arc d →i (top-down) but after any leftarc d →i (bottom-up). What is important to notice about the above transition-based parsers is that the adopted parsing strategies are static. By this we mean that each dependency is constructed according to some fixed criterion, depending on structural conditions such as the fact that the dependency represents a left or a right arc. This should be contrasted with dynamic parsing strategies in which several parsing options are simultaneously available for the dependencies being constructed. In the context of left-to-right, transition-based parsers, dynamic strategies are attractive for several reasons. One argument is related to the wellknown PP-attachment problem, illustrated in Figure 1. Here we have to choose whether to attach node P as a dependent of V (arc α2) or else as a dependent of N1 (arc α3). The purely bottomup arc-standard model has to take a decision as soon as N1 is placed into the stack. This is so V N1 P N2 α1 α2 α3 α4 Figure 1: PP-attachment example, with dashed arcs identifying two alternative choices. because the construction of α1 excludes α3 from the search space, while the alternative decision of shifting P into the stack excludes α2. 
This is bad, because the information about the correct attachment could come from the lexical content of node P. The arc-eager model performs slightly better, since it can delay the decision up to the point in which α1 has been constructed and P is read from the buffer. However, at this point it must make a commitment and either construct α3 or pop N1 from the stack (implicitly committing to α2) before N2 is read from the buffer. In contrast with this scenario, in the next sections we implement a dynamic parsing strategy that allows a transition system to decide between the attachments α2 and α3 after it has seen all of the four nodes V, N1, P and N2. Other additional advantages of dynamic parsing strategies with respect to static strategies are related to the increase in the feature inventory that we apply to parser states, and to the increase of spurious ambiguity. However, these arguments are more technical than the PP-attachment argument above, and will be discussed later. 3 Dependency Parser In this section we present a novel transition-based parser for projective dependency trees, implementing a dynamic parsing strategy. 3.1 Preliminaries For non-negative integers i and j with i ≤j, we write [i, j] to denote the set {i, i+1, . . . , j}. When i > j, [i, j] is the empty set. We represent an input sentence as a string w = w0 · · · wn, n ≥1, where token w0 is a special root symbol and, for each i ∈[1, n], token wi = (i, ai, ti) encodes a lexical element ai and a part-ofspeech tag ti associated with the i-th word in the sentence. A dependency tree for w is a directed, ordered tree Tw = (Vw, Aw), where Vw = {wi | i ∈ 136 w4 w2 w5 w7 w1 w3 w6 Figure 2: A dependency tree with left spine ⟨w4, w2, w1⟩and right spine ⟨w4, w7⟩. [0, n]} is the set of nodes, and Aw ⊆Vw × Vw is the set of arcs. Arc (wi, wj) encodes a dependency wi →wj. A sample dependency tree (excluding w0) is displayed in Figure 2. If (wi, wj) ∈Aw for j < i, we say that wj is a left child of wi; a right child is defined in a symmetrical way. The left spine of Tw is an ordered sequence ⟨u1, . . . , up⟩with p ≥1 and ui ∈Vw for i ∈[1, p], consisting of all nodes in a descending path from the root of Tw taking the leftmost child node at each step. More formally, u1 is the root node of Tw and ui is the leftmost child of ui−1, for i ∈[2, p]. The right spine of Tw is defined symmetrically; see again Figure 2. Note that the left and the right spines share the root node and no other node. 3.2 Basic Idea Transition-based dependency parsers use a stack data structure, where each stack element is associated with a tree spanning some (contiguous) substring of the input w. The parser can combine two trees T and T ′ through attachment operations, called left-arc or right-arc, under the condition that T and T ′ appear at the two topmost positions in the stack. Crucially, only the roots of T and T ′ are available for attachment; see Figure 3(a). In contrast, a stack element in our parser records the entire left spine and right spine of the associated tree. This allows us to extend the inventory of the attachment operations of the parser by including the attachment of tree T as a dependent of any node in the left or in the right spine of a second tree T ′, provided that this does not violate projectivity.1 See Figure 3(b) for an example. The new parser implements a mix of bottom-up and top-down strategies, since after any of the attachments in Figure 3(b) is performed, additional dependencies can still be created for the root of T. 
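To make the spine notation concrete, here is a minimal Python sketch (not taken from the paper: the function names are ours, and the example arc set is only one reconstruction consistent with Figure 2, which states the spines but not every arc). It computes left and right spines directly from the definitions in Section 3.1.

```python
def spines(root, arcs):
    """Left and right spines of a dependency tree (Sec. 3.1): starting at
    the root, repeatedly descend to the leftmost (resp. rightmost) child.
    arcs is a set of (head, dependent) pairs over integer word positions."""
    def follow(node, pick):
        path = [node]
        children = [d for (h, d) in arcs if h == node]
        while children:
            node = pick(children)
            path.append(node)
            children = [d for (h, d) in arcs if h == node]
        return path
    return follow(root, min), follow(root, max)

# Assumed arc set consistent with Figure 2 (w4 is the root).
arcs = {(4, 2), (4, 5), (4, 7), (2, 1), (2, 3), (5, 6)}
left, right = spines(4, arcs)
print(left, right)  # [4, 2, 1] [4, 7]  -- i.e. <w4, w2, w1> and <w4, w7>
```

A stack element in the new parser stores exactly such a pair of spines, which is what makes the extended attachments of Figure 3(b) possible.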
Furthermore, the new parsing strategy is clearly dy1A dependency tree for w is projective if every subtree has a contiguous yield in w. T T ′ T T ′ (a) (b) Figure 3: Left-arc attachment of T to T ′ in case of (a) standard transition-based parsers and (b) our parser. namic, due to the free choice in the timing for these attachments. The new strategy is more powerful than the strategy of the arc-eager model, since we can use top-down parsing at left arcs, which is not allowed in arc-eager parsing, and we do not have the restrictions of parsing right arcs (h →d) before the attachment of right dependents at node d. To conclude this section, let us resume our discussion of the PP-attachment example in Figure 1. We observe that the new parsing strategy allows the construction of a tree T ′ consisting of the only dependency V →N1 and a tree T, placed at the right of T ′, consisting of the only dependency P →N2. Since the right spine of T ′ consists of nodes V and N1, we can freely choose between attachment V →P and attachment N1 →P. Note that this is done after we have seen node N2, as desired. 3.3 Transition-based Parser We assume the reader is familiar with the formal framework of transition-based dependency parsing originally introduced by Nivre (2003); see Nivre (2008) for an introduction. To keep the notation at a simple level, we only discuss here the unlabeled version of our parser; however, a labeled extension is used in §5 for our experiments. Our transition-based parser uses a stack data structure to store partial parses for the input string w. We represent the stack as an ordered sequence σ = [σd, . . . , σ1], d ≥0, of stack elements, with the topmost element placed at the right. When d = 0, we have the empty stack σ = []. Sometimes we use the vertical bar to denote the append operator for σ, and write σ = σ′|σ1 to indicate that σ1 is the topmost element of σ. A stack element is a pair σk = (⟨uk,1, . . . , uk,p⟩, ⟨vk,1, . . . , vk,q⟩) where the ordered sequences ⟨uk,1, . . . , uk,p⟩and 137 ⟨vk,1, . . . , vk,q⟩are the left and the right spines, respectively, of the tree associated with σk. Recall that uk,1 = vk,1, since the root node of the associated tree is shared by the two spines. The parser also uses a buffer to store the portion of the input string still to be processed. We represent the buffer as an ordered sequence β = [wi, . . . , wn], i ≥0, of tokens from w, with the first element placed at the left. Note that β always represents a (non-necessarily proper) suffix of w. When i > n, we have the empty buffer β = []. Sometimes we use the vertical bar to denote the append operator for β, and write β = wi|β′ to indicate that wi is the first token of β; consequently, we have β′ = [wi+1, . . . , wn]. When processing w, the parser reaches several states, technically called configurations. A configuration of the parser relative to w is a triple c = (σ, β, A), where σ and β are a stack and a buffer, respectively, and A ⊆Vw × Vw is a set of arcs. The initial configuration for w is ([], [w0, . . . , wn], ∅). The set of terminal configurations consists of all configurations of the form ([σ1], [], A), where σ1 is associated with a tree having root w0, that is, u1,1 = v1,1 = w0, and A is any set of arcs. The core of a transition-based parser is the set of its transitions. Each transition is a binary relation defined over the set of configurations of the parser. Since the set of configurations is infinite, a transition is infinite as well, when viewed as a set. 
However, transitions can always be specified by some finite means. Our parser uses three types of transitions, defined in what follows. • SHIFT, or sh for short. This transition removes the first node from the buffer and pushes into the stack a new element, consisting of the left and right spines of the associated tree. More formally (σ, wi|β, A) ⊢sh (σ|(⟨wi⟩, ⟨wi⟩), β, A) • LEFT-ARCk, k ≥1, or lak for short. Let h be the k-th node in the left spine of the topmost tree in the stack, and let d be the root node of the second topmost tree in the stack. This transition creates a new arc h →d. Furthermore, the two topmost stack elements are replaced by a new element associated with the tree resulting from the h →d attachment. The transition does not advance with the reading of the buffer. More formally (σ′|σ2|σ1, β, A) ⊢lak (σ′|σla, β, A ∪{h →d}) where σ1 = (⟨u1,1, . . . , u1,p⟩, ⟨v1,1, . . . , v1,q⟩) , σ2 = (⟨u2,1, . . . , u2,r⟩, ⟨v2,1, . . . , v2,s⟩) , σla = (⟨u1,1, . . . , u1,k, u2,1, . . . , u2,r⟩, ⟨v1,1, . . . , v1,q⟩) , and where we have set h = u1,k and d = u2,1. • RIGHT-ARCk, k ≥1, or rak for short. This transition is defined symmetrically with respect to lak. We have (σ′|σ2|σ1, β, A) ⊢rak (σ′|σra, β, A ∪{h →d}) where σ1 and σ2 are as in the lak case, σra = (⟨u2,1, . . . , u2,r⟩, ⟨v2,1, . . . , v2,k, v1,1, . . . , v1,q⟩) , and we have set h = v2,k and d = v1,1. Transitions lak and rak are parametric in k, where k is bounded by the length of the input string and not by a fixed constant (but see also the experimental findings in §5). Thus our system uses an unbounded number of transition relations, which has an apparent disadvantage for learning algorithms. We will get back to this problem in §4.3. A complete computation relative to w is a sequence of configurations c1, c2, . . . , ct, t ≥1, such that c1 and ct are initial and final configurations, respectively, and for each i ∈[2, t], ci is produced by the application of some transition to ci−1. It is not difficult to see that the transition-based parser specified above is sound, meaning that the set of arcs constructed in any complete computation on w is always a dependency tree for w. The parser is also complete, meaning that every (projective) dependency tree for w is constructed by some complete computation on w. A mathematical proof of this statement is beyond the scope of this paper, and will not be provided here. 3.4 Deterministic Parsing Algorithm The transition-based parser of the previous section is a nondeterministic device, since several transitions can be applied to a given configuration. This might result in several complete computations 138 Algorithm 1 Parsing Algorithm Input: string w = w0 · · · wn, function score() Output: dependency tree Tw c = (σ, β, A) ←([], [w0, . . . , wn], ∅) while |σ| > 1 ∨|β| > 0 do while |σ| < 2 do update c with sh p ←length of left spine of σ1 s ←length of right spine of σ2 T ←{lak | k ∈[1, p]} ∪ {rak | k ∈[1, s]} ∪{sh} bestT ←argmaxt∈T score(t, c) update c with bestT return Tw = (Vw, A) for w. We present here an algorithm that runs the parser in pseudo-deterministic mode, greedily choosing at each configuration the transition that maximizes some score function. Algorithm 1 takes as input a string w and a scoring function score() defined over parser transitions and parser configurations. The scoring function will be the subject of §4 and is not discussed here. The output of the parser is a dependency tree for w. 
At each iteration the algorithm checks whether there are at least two elements in the stack and, if this is not the case, it shifts elements from the buffer to the stack. Then the algorithm uses the function score() to evaluate all transitions that can be applied under the current configuration c = (σ, β, A), and it applies the transition with the highest score, updating the current configuration. To parse a sentence of length n (excluding the root token w0) the algorithm applies exactly 2n+1 transitions. In the worst case, each transition application involves 1 + p + s transition evaluations. We therefore conclude that the algorithm always reaches a configuration with an empty buffer and a stack which contains only one element. Then the algorithm stops, returning the dependency tree whose arc set is defined as in the current configuration. 4 Model and Training In this section we introduce the adopted learning algorithm and discuss the model parameters. 4.1 Learning Algorithm We use a linear model for the score function in Algorithm 1, and define score(t, c) = ⃗ω · φ(t, c). Here ⃗ω is a weight vector and function φ provides Algorithm 2 Learning Algorithm Input: pair (w = w0 · · · wn, Ag), vector ⃗ω Output: vector ⃗ω c = (σ, β, A) ←([], [w0, . . . , wn], ∅) while |σ| > 1 ∨|β| > 0 do while |σ| < 2 do update c with SHIFT p ←length of left spine of σ1 s ←length of right spine of σ2 T ←{lak | k ∈[1, p]} ∪ {rak | k ∈[1, s]} ∪{sh} bestT ←argmaxt∈T score(t, c) bestCorrectT ← argmaxt∈T ∧isCorrect(t) score(t, c) if bestT ̸= bestCorrectT then ⃗ω ←⃗ω −φ(bestT, c) +φ(bestCorrectT, c) update c with bestCorrectT a feature vector representation for a transition t applying to a configuration c. The function φ will be discussed at length in §4.3. The vector ⃗ω is trained using the perceptron algorithm in combination with the averaging method to avoid overfitting; see Freund and Schapire (1999) and Collins and Duffy (2002) for details. The training data set consists of pairs (w, Ag), where w is a sentence and Ag is the set of arcs of the gold (desired) dependency tree for w. At training time, each pair (w, Ag) is processed using the learning algorithm described as Algorithm 2. The algorithm is based on the notions of correct and incorrect transitions, discussed at length in §4.2. Algorithm 2 parses w following Algorithm 1 and using the current ⃗ω, until the highest score selected transition bestT is incorrect according to Ag. When this happens, ⃗ω is updated by decreasing the weights of the features associated with the incorrect bestT and by increasing the weights of the features associated with the transition bestCorrectT having the highest score among all possible correct transitions. After each update, the learning algorithm resumes parsing from the current configuration by applying bestCorrectT, and moves on using the updated weights. 4.2 Correct and Incorrect Transitions Standard transition-based dependency parsers are trained by associating each gold tree with a canonical complete computation. This means that, for each configuration of interest, only one transition 139 σ2 σ1 b1 (a) σ2 σ1 b1 (b) σ2 σ1 · · · bi (c) σ2 σ1 · · · bi (d) Figure 4: Graphical representation of configurations; drawn arcs are in Ag but have not yet been added to the configuration. Transition sh is incorrect for configuration (a) and (b); sh and ra1 are correct for (c); sh and la1 are correct for (d). leading to the gold tree is considered as correct. 
In this paper we depart from such a methodology, and follow Goldberg and Nivre (2012) in allowing more than one correct transition for each configuration, as explained in detail below. Let (w, Ag) be a pair in the training set. In §3.3 we have mentioned that there is always a complete computation on w that results in the construction of the set Ag. In general, there might be more than one computation for Ag. This means that the parser shows spurious ambiguity. Observe that all complete computations for Ag share the same initial configuration cI,w and final configuration cF,Ag. Consider now the set C(w) of all configurations c that are reachable from cI,w, meaning that there exists a sequence of transitions that takes the parser from cI,w to c. A configuration c ∈C(w) is correct for Ag if cF,Ag is reachable from c; otherwise, c is incorrect for Ag. Let c ∈C(w) be a correct configuration for Ag. A transition t is correct for c and Ag if c ⊢t c′ and c′ is correct for Ag; otherwise, t is incorrect for c and Ag. The next lemma provides a characterization of correct and incorrect transitions; see Figure 4 for examples. We use this characterization in the implementation of predicate isCorrect() in Algorithm 2. Lemma 1 Let (w, Ag) be a pair in the training set and let c ∈C(w) with c = (σ, β, A) be a correct configuration for Ag. Let also v1,k, k ∈[1, q], be the nodes in the right spine of σ1. (i) lak and rak are incorrect for c and Ag if and only if they create a new arc (h →d) ̸∈Ag; (ii) sh is incorrect for c and Ag if and only if the following conditions are both satisfied: (a) there exists an arc (h →d) in Ag such that h is in σ and d = v1,1; (b) there is no arc (h′ →d′) in Ag with h′ = v1,k, k ∈[1, q], and d′ in β. 2 PROOF (SKETCH) To prove part (i) we focus on transition rak; a similar argument applies to lak. The ‘if’ statement in part (i) is self-evident. ‘Only if’. Assuming that transition rak creates a new arc (h →d) ∈Ag, we argue that from configuration c′ with c ⊢rak c′ we can still reach the final configuration associated with Ag. We have h = v2,k and d = u1,1. The tree fragments in σ with roots v2,k+1 and u1,1 must be adjacent siblings in the tree associated with Ag, since c is a correct configuration for Ag and (v2,k →u1,1) ∈Ag. This means that each of the nodes v2,k+1, . . . , v2,s in the right spine in σ2 in c must have already acquired all of its right dependents, since the tree is projective. Therefore it is safe for transition rak to eliminate the nodes v2,k+1, . . . , v2,s from the right spine in σ2. We now deal with part (ii). Let c ⊢sh c′, c′ = (σ′, β′, A). ‘If’. Assuming (ii)a and (ii)b, we argue that c′ is incorrect. Node d is the head of σ′ 2. Arc (h →d) is not in A, and the only way we could create (h →d) from c′ is by reaching a new configuration with d in the topmost stack symbol, which amounts to say that σ′ 1 can be reduced by a correct transition. Node h is in some σ′ i, i > 2, by (ii)a. Then reduction of σ′ 1 implies that the root of σ′ 1 is reachable from the root of σ′ 2, which contradicts (ii)b. ‘Only if’. Assuming (ii)a is not satisfied, we argue that sh is correct for c and Ag. There must be an arc (h →d) not in A with d = v1,1 and h is some token wi in β. From stack σ′ = σ′′|σ′ 2|σ′ 1 it is always possible to construct (h →d) consuming the substring of β up to wi and ending up with stack σ′′|σred, where σred is a stack element with root wi. From there, the parser can move on to the final configuration cF,Ag. 
A similar argument applies if we assume that (ii)b is not satisfied. ■ From condition (i) in Lemma 1 and from the fact that there are no cycles in Ag, it follows that there is at most one correct transition among the transitions of type lak or rak. From condition (ii) in the lemma we can also see that the existence of a correct transition of type lak or rak for some configuration does not imply that the sh transition is incorrect 140 for the same configuration; see Figures 4(c,d) for examples. It follows that for a correct configuration there might be at most 2 correct transitions. In our training experiments for English in §5 we observe 2 correct transitions for 42% of the reached configurations. This nondeterminism is a byproduct of the adopted dynamic parsing strategy, and eventually leads to the spurious ambiguity of the parser. As already mentioned, we do not impose any canonical form on complete computations that would hardwire a preference for some correct transition and get rid of spurious ambiguity. Following Goldberg and Nivre (2012), we instead regard spurious ambiguity as an additional resource of our parsing strategy. Our main goal is that the training algorithm learns to prefer a sh transition in a configuration that does not provide enough information for the choice of the correct arc. In the context of dependency parsing, the strategy of delaying arc construction when the current configuration is not informative is called the easy-first strategy, and has been first explored by Goldberg and Elhadad (2010). 4.3 Feature Extraction In existing transition-based parsers a set of atomic features is statically defined and extracted from each configuration. These features are then combined together into complex features, according to some feature template, and joined with the available transition types. This is not possible in our system, since the number of transitions lak and rak is not bounded by a constant. Furthermore, it is not meaningful to associate transitions lak and rak, for any k ≥1, always with the same features, since the constructed arcs impinge on nodes at different depths in the involved spines. It seems indeed more significant to extract information that is local to the arc h →d being constructed by each transition, such as for instance the grandparent and the great grandparent nodes of d. This is possible if we introduce a higher level of abstraction than in existing transition-based parsers. We remark here that this abstraction also makes the feature representation more similar to the ones typically found in graph-based parsers, which are centered on arcs or subgraphs of the dependency tree. We index the nodes in the stack σ relative to the head node of the arc being constructed, in case of the transitions lak or rak, or else relative to the root node of σ1, in case of the transition sh. More precisely, let c = (σ, β, A) be a configuration and let t be a transition. We define the context of c and t as the tuple C(c, t) = (s3, s2, s1, q1, q2, gp, gg), whose components are placeholders for word tokens in σ or in β. All these placeholders are specified in Table 1, for each c and t. Figure 5 shows an example of feature extraction for the displayed configuration c = (σ, β, A) and the transition la2. In this case we have s3 = u3,1, s2 = u2,1, s1 = u1,2, q1 = gp = u1,1, q2 = b1; gg = none because the head of gp is not available in c. 
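As an illustrative sketch of how these placeholders can be computed, the following Python function covers only the LEFT-ARC_k case (SHIFT and RIGHT-ARC_k are analogous); the function name and the assumed data layout, with stack elements stored as (left spine, right spine) pairs and the topmost element at stack[-1], are ours rather than the paper's.

```python
def context_lak(stack, buffer, k):
    """Placeholders (s3, s2, s1, q1, q2, gp, gg) for a LEFT-ARC_k transition.
    Each stack element is a (left_spine, right_spine) pair whose two spines
    share the tree root at index 0; sigma_1 is stack[-1], sigma_2 is stack[-2]."""
    none = None
    buf = list(buffer) + [none, none]        # pad so b1 and b2 are always defined
    left1 = stack[-1][0]                     # left spine of sigma_1
    s1 = left1[k - 1]                        # u_{1,k}: head of the new arc
    s2 = stack[-2][0][0]                     # u_{2,1}: the dependent (root of sigma_2)
    s3 = stack[-3][0][0] if len(stack) >= 3 else none
    q1 = buf[0] if k == 1 else left1[k - 2]  # parent of s1 when it exists, else b1
    q2 = buf[1] if k == 1 else (buf[0] if k == 2 else left1[k - 3])
    gp = none if k == 1 else left1[k - 2]    # grandparent of the dependent
    gg = none if k <= 2 else left1[k - 3]    # great-grandparent, when available
    return s3, s2, s1, q1, q2, gp, gg
```

For the la2 example of Figure 5 this returns s3 = u3,1, s2 = u2,1, s1 = u1,2, q1 = gp = u1,1, q2 = b1 and gg = None, matching the values listed above.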
Note that in Table 1 placeholders are dynamically assigned in such a way that s1 and s2 refer to the nodes in the constructed arc h →d, and gp, gg refer to the grandparent and the great grandparent nodes, respectively, of d. Furthermore, the node assigned to s3 is the parent node of s2, if such a node is defined; otherwise, the node assigned to s3 is the root of the tree fragment in the stack underneath σ2. Symmetrically, placeholders q1 and q2 refer to the parent and grandparent nodes of s1, respectively, when these nodes are defined; otherwise, these placeholders get assigned tokens from the buffer. See again Figure 5. Finally, from the placeholders in C(c, t) we extract a standard set of atomic features and their complex combinations, to define the function φ. Our feature template is an extended version of the feature template of Zhang and Nivre (2011), originally developed for the arc-eager model. The extension is obtained by adding top-down features for left-arcs (based on placeholders gp and gg), and by adding right child features for the first stack element. The latter group of features is usually exploited for the arc-standard model, but is undefined for the arc-eager model. 5 Experimental Assessment Performance evaluation is carried out on the Penn Treebank (Marcus et al., 1993) converted to Stanford basic dependencies (De Marneffe et al., 2006). We use sections 2-21 for training, 22 as development set, and 23 as test set. The part-of-speech tags are assigned by an automatic tagger with accuracy 97.1%. The tagger used on the training set is trained on the same data set by using four-way jackknifing, while the tagger used on the development and test sets is trained on all the training set. We train an arc-labeled version of our parser. In the first three lines of Table 2 we compare 141 context sh lak rak placeholder k = 1 k = 2 k > 2 k = 1 k = 2 k > 2 s1 u1,1 = v1,1 u1,k u1,1 = v1,1 s2 u2,1 = v2,1 u2,1 = v2,1 v2,k s3 u3,1 = v3,1 u3,1 = v3,1 u3,1 = v3,1 v2,k−1 q1 b1 b1 u1,k−1 b1 q2 b2 b2 b1 u1,k−2 b2 gp none none u1,k−1 none v2,k−1 gg none none none u1,k−2 none none v2,k−2 Table 1: Definition of C(c, t) = (s3, s2, s1, q1, q2, gp, gg), for c = (σ′|σ3|σ2|σ1, b1|b2|β, A) and t of type sh or lak, rak, k ≥1. Symbols uj,k and vj,k are the k-th nodes in the left and right spines, respectively, of stack element σj, with uj,1 = vj,1 being the shared root of σj; none is an artificial element used when some context’s placeholder is not available. · · · stack σ u3,1 = v3,1 v3,2 u2,1 = v2,1 u2,2 v2,2 v2,3 u1,1 = v1,1 u1,2 v1,2 u1,3 v1,3 la2 buffer β b1 b2 b3 · · · context extracted for la2 s3 s2 s1 q1=gp q2 Figure 5: Extraction of atomic features for context C(c, la2) = (s3, s2, s1, q1, q2, gp, gg), c = (σ, β, A). parser iter UAS LAS UEM arc-standard 23 90.02 87.69 38.33 arc-eager 12 90.18 87.83 40.02 this work 30 91.33 89.16 42.38 arc-standard + easy-first 21 90.49 88.22 39.61 arc-standard + spine 27 90.44 88.23 40.27 Table 2: Accuracy on test set, excluding punctuation, for unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled exact match (UEM). the accuracy of our parser against our implementation of the arc-eager and arc-standard parsers. For the arc-eager parser, we use the feature template of Zhang and Nivre (2011). The same template is adapted to the arc-standard parser, by removing the top-down parent features and by adding the right child features for the first stack element. 
It turns out that our feature template, described in §4.3, is the exact merge of the templates used for the arc-eager and the arc-standard parsers. We train all parsers up to 30 iterations, and for each parser we select the weight vector ⃗ω from the iteration with the best accuracy on the development set. All our parsers attach the root node at the end of the parsing process, following the ‘None’ approach discussed by Ballesteros and Nivre (2013). Punctuation is excluded in all evaluation metrics. Considering UAS, our parser provides an improvement of 1.15 over the arc-eager parser and an improvement of 1.31 over the arc-standard parser, that is an error reduction of ∼12% and ∼13%, respectively. Considering LAS, we achieve improvements of 1.33 and 1.47, with an error reduction of ∼11% and ∼12%, over the arc-eager and the arc-standard parsers, respectively. We speculate that the observed improvement of our parser can be ascribed to two distinct components. The first component is the left-/rightspine representation for stack elements, introduced in §3.3. The second component is the easy-first strategy, implemented on the basis of the spurious ambiguity of our parser and the definition of correct/incorrect transitions in §4.2. In this perspective, we observe that our parser can indeed be viewed as an arc-standard model augmented with (i) the spine representation, and (ii) the easy-first strategy. More specifically, (i) generalizes the la/ra transitions to the lak/rak transitions, introducing a top-down component into the purely bottom-up arc-standard. On the other hand, (ii) drops the limitation of canonical computations for the arc-standard, and leverages 142 on the spurious ambiguity of the parser to enlarge the search space. The two components above are mutually independent, meaning that we can individually implement each component on top of an arc-standard model. More precisely, the arc-standard + spine model uses the transitions lak/rak but retains the definition of canonical computation, defined by applying each lak/rak transition as soon as possible. On the other hand, the arc-standard + easy-first model retains the original la/ra transitions but is trained allowing any correct transition at each configuration. In this case the characterization of correct and incorrect configurations in Lemma 1 has been adapted to transitions la/ra, taking into account the bottom-up constraint. With the purpose of incremental comparison, we report accuracy results for the two ‘incremental’ models in the last two lines of Table 2. Analyzing these results, and comparing with the plain arcstandard, we see that the spine representation and the easy-first strategy individually improve accuracy. Moreover, their combination into our model (third line of Table 2) works very well, with an overall improvement larger than the sum of the individual contributions. We now turn to a computational analysis. At each iteration our parser evaluates a number of transitions bounded by γ + 1, with γ the maximum value of the sum of the lengths of the left spine in σ1 and of the right spine in σ2. Quantity γ is bounded by the length n of the input sentence. Since the parser applies exactly 2n + 1 transitions, worst case running time is O(n2). We have computed the average value of γ on our English data set, resulting in 2.98 (variance 2.15) for training set, and 2.95 (variance 1.96) for development set. 
We conclude that, in the expected case, running time is O(n), with a slow down constant which is rather small, in comparison to standard transition-based parsers. Accordingly, when running our parser against our implementation of the arc-eager and arc-standard models, we measured a slow-down of 2.8 and 2.2, respectively. Besides the change in representation, this slow-down is also due to the increase in the number of features in our system. We have also checked the worst case value of γ in our data set. Interestingly, we have seen that for strings of length smaller than 40 this value linearly grows with n, and for longer strings the growth stops, with a maximum worst case observed value of 22. 6 Concluding Remarks We have presented a novel transition-based parser using a dynamic parsing strategy, which achieves a ∼12% error reduction in unlabeled attachment score over the static arc-eager strategy and even more over the (equally static) arc-standard strategy, when evaluated on English. The idea of representing the right spine of a tree within the stack elements of a shift-reduce device is quite old in parsing, predating empirical approaches. It has been mainly exploited to solve the PP-attachment problem, motivated by psycholinguistic models. The same representation is also adopted in applications of discourse parsing, where right spines are usually called right frontiers; see for instance Subba and Di Eugenio (2009). In the context of transition-based dependency parsers, right spines have also been exploited by Kitagawa and Tanaka-Ishii (2010) to decide where to attach the next word from the buffer. In this paper we have generalized their approach by introducing the symmetrical notion of left spine, and by allowing attachment of full trees rather than attachment of a single word.2 Since one can regard a spine as a stack in itself, whose elements are tree nodes, our model is reminiscent of the embedded pushdown automata of Schabes and Vijay-Shanker (1990), used to parse tree adjoining grammars (Joshi and Schabes, 1997) and exploiting a stack of stacks. However, by imposing projectivity, we do not use the extra-power of the latter class. An interesting line of future research is to combine our dynamic parsing strategy with a training method that allows the parser to explore transitions that apply to incorrect configurations, as in Goldberg and Nivre (2012). Acknowledgments We wish to thank Liang Huang and Marco Kuhlmann for discussion related to the ideas reported in this paper, and the anonymous reviewers for their useful suggestions. The second author has been partially supported by MIUR under project PRIN No. 2010LYA9RH 006. 2Accuracy comparison of our work with Kitagawa and Tanaka-Ishii (2010) is not meaningful, since these authors have evaluated their system on the same data set but based on gold part-of-speech tags (personal communication). 143 References Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL), pages 166– 170. Miguel Ballesteros and Joakim Nivre. 2013. Going to the roots of dependency parsing. Computational Linguistics, 39(1):5–13. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1455– 1465. Jinho D. 
Choi and Martha Palmer. 2011. Getting the most out of transition-based dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 687–692. Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 263–270, Philadephia, Pennsylvania. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), volume 6, pages 449–454. Yoav Freund and Robert E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, December. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 742– 750, Los Angeles, USA. Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proceedings of the 24th International Conference on Computational Linguistics (COLING), pages 959–976. Aravind K. Joshi and Yves Schabes. 1997. TreeAdjoining Grammars. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, volume 3, pages 69–123. Springer. Kotaro Kitagawa and Kumiko Tanaka-Ishii. 2010. Tree-based deterministic dependency parsing — an application to Nivre’s method —. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL) Short Papers, pages 189–193. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330. Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122–131. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the Eighth International Workshop on Parsing Technologies (IWPT), pages 149–160, Nancy, France. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 50–57, Barcelona, Spain. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. Yves Schabes and K. Vijay-Shanker. 1990. Deterministic left to right parsing of tree adjoining languages. In Proceedings of the 28th annual meeting of the Association for Computational Linguistics (ACL), pages 276–283, Pittsburgh, Pennsylvania. Rajen Subba and Barbara Di Eugenio. 2009. An effective discourse parser that uses rich linguistic information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 566–574. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pages 195–206. Yue Zhang and Stephen Clark. 2008. 
A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 562– 571. Yue Zhang and Joakim Nivre. 2011. Transition-based parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 188–193. 144
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1425–1434, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Learning a Phrase-based Translation Model from Monolingual Data with Application to Domain Adaptation Jiajun Zhang and Chengqing Zong National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences, Beijing, China {jjzhang, cqzong}@nlpr.ia.ac.cn Abstract Currently, almost all of the statistical machine translation (SMT) models are trained with the parallel corpora in some specific domains. However, when it comes to a language pair or a different domain without any bilingual resources, the traditional SMT loses its power. Recently, some research works study the unsupervised SMT for inducing a simple word-based translation model from the monolingual corpora. It successfully bypasses the constraint of bitext for SMT and obtains a relatively promising result. In this paper, we take a step forward and propose a simple but effective method to induce a phrase-based model from the monolingual corpora given an automatically-induced translation lexicon or a manually-edited translation dictionary. We apply our method for the domain adaptation task and the extensive experiments show that our proposed method can substantially improve the translation quality. 1 Introduction During the last decade, statistical machine translation has made great progress. Novel translation models, such as phrase-based models (Koehn et a., 2007), hierarchical phrase-based models (Chiang, 2007) and linguistically syntax-based models (Liu et a., 2006; Huang et al., 2006; Galley, 2006; Zhang et al, 2008; Chiang, 2010; Zhang et al., 2011; Zhai et al., 2011, 2012) have been proposed and achieved higher and higher translation performance. However, all of these state-of-the-art translation models rely on the parallel corpora to induce translation rules and estimate the corresponding parameters. It is unfortunate that the parallel corpora are very expensive to collect and are usually not available for resource-poor languages and for many specific domains even in a resource-rich language pair. Recently, more and more researchers concentrated on taking full advantage of the monolingual corpora in both source and target languages, and proposed methods for bilingual lexicon induction from non-parallel data (Rapp, 1995, 1999; Koehn and Knight, 2002; Haghighi et al., 2008; Daumé III and Jagarlamudi, 2011) and proposed unsupervised statistical machine translation (bilingual lexicon is a byproduct) with only monolingual corpora (Ravi and Knight, 2011; Nuhn et al., 2012; Dou and Knight, 2012). In the bilingual lexicon induction (Koehn and Knight, 2002; Haghighi et al., 2008; Daumé III and Jagarlamudi, 2011), with the help of the orthographic and context features, researchers adopted an unsupervised method, such as canonical correlation analysis (CCA) model, to automatically induce the word translation pairs between two languages from non-parallel data only requiring that the monolingual data in each language are from a fairly comparable domain. The unsupervised statistical machine translation method (Ravi and Knight, 2011; Nuhn et al., 2012; Dou and Knight, 2012) viewed the translation task as a decipherment problem and designed a generative model with the objective function to maximize the likelihood of the source language monolingual data. 
To tackle the large-scale vocabulary, they mainly considered the word-based model (e.g. IBM Model 3) and applied the Bayesian method with Gibbs sampling or slice sampling. Finally, they used the learned translation model directly to translate unseen data (Ravi and Knight, 2011; Nuhn et al., 2012) or incorporated the learned bilingual lexicon as a new in-domain translation resource into the phrase-based model which is trained with out-of-domain data to improve the domain adaptation performance in machine translation (Dou and Knight, 2012). We can easily see that these unsupervised methods can only induce the word-based translation rules (bilingual lexicon) at present. It is a big challenge that whether we can induce phrase 1425 1, word reordering example: 本 发明 的 目的 在于 ||| the purpose of the invention is to ||| 0-0 0-3 1-4 2-2 3-1 4-5 4-6 2, idiom example: 辨识 真伪 的 ||| distinguish the true from the false ||| 0-0 1-2 1-5 2-1 2-4 3, unknown word translation: 发光 二极管 芯片 的 ||| of the light-emitting diode chip ||| 0-2 1-2 2-4 3-0 3-1 Table 1: Examples of new translation knowledge learned with the proposed phrase pair induction method. For the three fields separated by “|||”, the first two are respectively Chinese and English phrase, and the last one is the word alignment between these two phrases. level translation rules and learn a phrase-based model from the monolingual corpora. In this paper, we focus on exploring this direction and propose a simple but effective method to induce the phrase-level translation rules from monolingual data. The main idea of our method is to divide the phrase-level translation rule induction into two steps: bilingual lexicon induction and phrase pair induction. Since many researchers have studied the bilingual lexicon induction, in this paper, we mainly concentrate ourselves on phrase pair induction given a probabilistic bilingual lexicon and two in-domain large monolingual data (source and target language). In addition, we will further introduce how to refine the induced phrase pairs and estimate the parameters of the induced phrase pairs, such as four standard translation features and phrase reordering feature used in the conventional phrase-based models (Koehn et al., 2007). The induced phrase-based model will be used to help domain adaptation for machine translation. In the rest of this paper, we first explain with examples to show what new translation knowledge can be learned with our proposed phrase pair induction method (Section 2), and then we introduce the approach for probabilistic bilingual lexicon acquisition in Section 3. In Section 4 and 5, we respectively present our method for phrase pair induction and introduce an approach for phrase pair refinement and parameter estimation. Section 6 will show the detailed experiments for the task of domain adaptation. We will introduce some related work in Section 7 and conclude this paper in Section 8. 2 What Can We Learn with Phrase Pair Induction? Readers may doubt that if phrase pair induction is performed only using bilingual lexicon and monolingual data, what new translation knowledge can be learned? The bilingual lexicon can only express the translation equivalence between source- and target-side word pair and has little ability to deal with word reordering and idiom translation. In contrast, phrase pair induction can make up for this deficiency to some extent. Furthermore, our method is able to learn some unknown word translations. 
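As a small side illustration (the helper below is ours, not part of the proposed method), entries in the "source ||| target ||| alignment" format of Table 1 can be read into token lists plus a set of alignment links:

```python
def parse_phrase_pair(line):
    """Split a 'src ||| tgt ||| alignment' entry into source tokens,
    target tokens and a set of (src_index, tgt_index) alignment links."""
    src, tgt, align = [field.strip() for field in line.split("|||")]
    links = {tuple(int(i) for i in link.split("-")) for link in align.split()}
    return src.split(), tgt.split(), links

src, tgt, links = parse_phrase_pair(
    "辨识 真伪 的 ||| distinguish the true from the false ||| 0-0 1-2 1-5 2-1 2-4")
# (1, 2) and (1, 5) are both in links: source word 1 aligns to both 'true'
# and 'false', the idiom case discussed below.
```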
From the induced phrase pairs with our method, we have conducted a deep analysis and find that we can learn three kinds of new translation knowledge: 1) word reordering in a phrase pair; 2) idioms; and 3) unknown word translations. Table 1 gives examples for each of the three kinds. For the first example, the source and target phrase are extracted respectively from monolingual data, each word in the source phrase has a translation in the target phrase, but the word order is different. The word order encoded in a phrase pair is difficult to learn in a word-based SMT. In the second example, the italic source word corresponds to two target words (in italic), and the phrase pair is an idiom which cannot be learned from word-based SMT. In the third example, as we learn from the source and target monolingual text that the words around the italic ones are translations with each other, thus we cannot only extract a new phrase pair but also learn a translation pair of unknown words in italic. 3 Probabilistic Bilingual Lexicon Acquisition In order to induce the phrase pairs from the indomain monolingual data for domain adaptation, the probabilistic bilingual lexicon is essential. In this paper, we acquire the probabilistic bilingual lexicon from two approaches: 1) build a bilingual lexicon from large-scale out-of-domain parallel data; 2) adopt a manually collected indomain lexicon. This paper uses Chinese-toEnglish translation as a case study and electronic data is the in-domain data we focus on. 1426 In Chinese-to-English translation, there are lots of parallel data on News. Here, we utilize about 2.08 million sentence pairs1 in News domain to learn a probabilistic bilingual lexicon. Basically, we can use GIZA++ (Och, 2003) to get the probabilistic lexicon. However, the problem is that each source-side word associates too many possible translations which contain much noise. For instance, in the lexicon obtained with GIZA++, each source-side word has about 13 translations on average. The noise of the lexicon can influence the accuracy of the induced phrase pairs to a large extent. To learn a lexicon with a high precision, we follow Munteanu and Marcu (2006) to apply Log-Likelihood-Ratios (Dunning, 1993; Melamed, 2000; Moore, 2004a, 2004b) to estimate how strong the association is between a source-side word and its aligned target-side word. We employ the same algorithm used in (Munteanu and Marcu, 2006) which first use the GIZA++ (with grow-diag-final-and heuristic) to obtain the word alignment between source and target words, and then calculate the association strength between the aligned words. After using the log-likelihood-ratios algorithm2, we obtain a probabilistic bilingual lexicon with bidirectional translation probabilities from the out-of-domain data. In the final lexicon, the number of average translations is only 5. We call this lexicon LLRlex. In the electronic domain, we manually collected a lexicon which contains about 140k entries. It should be noted that there is no translation probability in this lexicon. In order to assign probabilities to each entry, we apply the Corpus Translation Probability which used in (Wu et al., 2008): given an in-domain source language monolingual data, we translate this data with the phrase-based model trained on the out-of-domain News data, the in-domain lexicon and the indomain target language monolingual data (for language model estimation). 
With the source language data and its translation, we estimate the bidirectional translation probabilities for each entry in the original lexicon. For the entries whose translation probabilities are not estimated, we just assign a uniform probability. That is if a source word has n translations, then the translation probability of target word given the source word is 1/n. We call this lexicon Domain-lex. 1 LDC category numbers are: LDC2000T50, LDC2003E14, LDC2003E07, LDC2004T07, LDC2005T06, LDC2002L27, LDC2005T10 and LDC2005T34. 2 Following Moore (2004b), we use the threshold 10 on LLR to filter out unlikely translations. We combine LLR-lex and Domain-lex to obtain the final probabilistic bilingual lexicon for phrase pair induction. 4 Phrase Pair Induction Method Given a probabilistic bilingual lexicon and two monolingual data, we present a simple but effective method for phrase pair induction in this section. Figure 1: a naïve algorithm for phrase pair induction. 4.1 A Naïve Method We first introduce a relatively naïve way to extract phrase pairs from the given resources. For a source phrase (word sequence), we can reorder the words in the phrase (permutation) first, and then obtain the target phrases with the bilingual lexicon (translation), and finally check if the target phrase is in the target monolingual data. The algorithm is given in Figure 1. Figure 1 shows that the naïve algorithm is very easy to implement. However, the time complexity is too high. For each source phrase j is (with  1 ! j i  permutations), suppose a source word has C translations on average and checking whether the target phrase ' ' j it in T needs time   O T , then, phrase pair induction for a single source phrase needs time     1 1 ! j i O C T j i   . It is very time consuming. One may design smarter algorithms. For example, one can collect distinct n-grams from source and target monolingual data. Then, for a source-side phrase with length L, one can find the best translation candidate using the probabilistic bilingual lexicon from the target-side phrases with the same length L. The biggest disadvantage of these algorithms is that they can only induce phrase pair (with the Input: Probabilistic bilingual lexicon V (each source word s maps a translation set V[s]) Source language monolingual data S={sn} n=1...N Target language monolingual data T={tm} m=1...M Output: Phrase pairs P 1: For each distinct source-side phrase j is in S: 2: If each j k i s s  in V: 3: Collect [ ]j k k i V s  4: For each permutation ' ' j is of j is : 5: If ' ' j it in T: ' ' [ ] ' [ , ] k k t V s k i j   6: Add phrase pair   ' ' , j j i i s t into P 1427 same length) encoding word reordering, but cannot learn phrase pairs in different length. Furthermore, they cannot learn idioms and unknown word translations from monolingual data. Obviously, these kind of approaches is not optimal. 4.2 Phrase Pair Induction with Inverted Index In order to make the phrase pair induction both effective and efficient, we propose a method using inverted index data structure which is usually a central component of a typical search engine. The inverted index is employed to represent the target language monolingual data. For a target language word, the inverted index not only records the sentence position in monolingual data, but also records the word position in a sentence. Some examples are shown in Table 2. 
By doing this, we do not need to iterate all the permutations of source language phrase j is to explore possible phrase pairs encoding word reordering. Furthermore, it is possible to learn idiom translation and unknown word translations. We will elaborate how to induce phrase pairs with the help of inverted index. Target Language Word Position communication (2,5), (106,20), …, (23022, 12) … … zoom (90,2), (280,21), …, (90239,15) Table 2: Some examples of inverted index for target language words, (2,5) means that “communication” occurs at the 5th word of the 2nd sentence in the target monolingual data. The new algorithm for phrase pair induction is presented in Figure 2. Line 1 iterates all the distinct phrases in the source-side monolingual data. It can be implemented by collecting all the distinct n-grams in which n is the phrase length we are interested in (3 to 7 in this paper). For each distinct source-side phrase, Line 2-5 efficiently collects all the positions in the target monolingual data for the translations of each word in the source phrase. Line 6 sorts the positions so that we can easily find the position sequence belonging to a same sentence. Line 8-9 discards all the position sub-sequences that lack translations for more than one source-side words. That is to say we allow at most one unknown word in an induced phrase pair in order to make the induction more accurate. Line 10 and Line 12 is the core of this algorithm. We first define a constraint before detailing the algorithm. Figure 2: Phrase pair induction using inverted index. Constraint: we require that there exists at most one phrase in a target sentence that is the translation of the source-side phrase. According to our analysis, it is not often to find that two phrases (length larger than 2) in a same sentence have the same meaning. Even if it happens, it is reasonable to keep the one with the highest probability. Given a position sequence belonging to a same sentence, Line 10 smoothes the probability of the single word gap according to the probabilities of the around words. Single word gap means that this word is not aligned but its left and right words are aligned with the words of the source-side phrase. Suppose the target sub-sequence is i i r j t t t  and i r t  is the only word that is not aligned with source-side words. We smooth the probability   | i r p t null  as follows:            1 1 1 1 min | , | , 1 1 2 | | | , 2 i j i r i r i t j t i r i r t i r t p t s p t s if r or r j p t null p t s p t s otherwise                (1) The above formula means that if the left or the right side only has one word, then the smoothed probability is one half of the minimum of the probabilities of the two neighbors, otherwise the smoothed probability is the average of the probabilities of the two neighbors. This smoothing strategy encourages that if more words around the un-aligned word are translations of the source-side phrase, then the gap word is more likely to belong to the translations of the sourceside phrase. 
Figure 2: Phrase pair induction using the inverted index.
  Input: probabilistic bilingual lexicon $V$ (each source word $s$ maps to a translation set $V[s]$);
         source language monolingual data $S = \{s_n\}_{n=1 \ldots N}$;
         inverted index IMap representing the target language monolingual data
  Output: phrase pairs $P$
  1:  for each distinct source-side phrase $s_i^j$ in $S$:
  2:    positionArray = []
  3:    for each word $s_k \in s_i^j$:
  4:      for each $t \in V[s_k]$:
  5:        add IMap[$t$] into positionArray
  6:    sort positionArray
  7:    for each position sequence belonging to the same sentence in positionArray:
  8:      if more than one word in $s_i^j$ has no translation in the sequence:
  9:        discard this sequence and continue
  10:     smooth the probability of the single-word gap
  11:     for each continuous position sub-sequence:
  12:       find the target phrase $t_h^k$ with the maximum probability
  13:       add the phrase pair $(s_i^j, t_h^k)$ to $P$

After probability smoothing of the single gap word, we are ready to extract the candidate translation of the source-side phrase. Similarly to Lines 8-9 of Figure 2, we further filter out a continuous target phrase if more than one word of the source-side phrase has no translation in it. After that, we simply choose the continuous target phrase with the largest probability if two or more continuous target phrases exist in the same target sentence. The probability of a target-side phrase given the source-side phrase is computed similarly to that of Koehn et al. (2003), except that we impose length normalization:

$p_{lex}(t \mid s, a) = \Big[ \prod_{i=1}^{n} \frac{1}{|\{j \mid (i,j) \in a\}|} \sum_{\forall (i,j) \in a} p(t_i \mid s_j) \Big]^{1/n}$   (2)

where $n$ is the length of the target phrase $t$ and the word alignment $a$ is produced using the probabilistic bilingual lexicon. If a target word in $t$ is a gap word, we suppose there is a word alignment between the target gap word and the source-side null. Similarly, we can compute the probability of the source-side phrase given the target-side phrase, $p_{lex}(s \mid t, a)$. Then, we find the target-side phrase with the biggest value of $p_{lex}(t \mid s, a) \times p_{lex}(s \mid t, a)$. Line 13 in Figure 2 collects the induced phrase pairs.

As for time complexity, it depends on the length of positionArray, since the time complexity of the core algorithm (Lines 7-13) is proportional to the length of positionArray. If positionArray contained almost all the positions in the target monolingual data $T$, the worst-case time complexity would be $O(|T| \log |T|)$ (for the array sort). However, we find in the target monolingual data (1 million sentences) that each distinct word occurs 110 times on average. Then, for a source-side phrase with 7 words, the average length of positionArray is about 3850, since each source word has 5 target translations on average (mentioned in Section 3). Therefore, the algorithm is relatively efficient in the average case.

5 Phrase Pair Refinement and Parameterization

5.1 Phrase Pair Refinement

Some of the phrase pairs induced in Section 4 may contain noise. According to our analysis, the biggest problem is that on the target side of a phrase pair, two or more identical words may be aligned to the same source-side word. For example, we extract the following phrase pair:

的 商业 信息
of business information of

In the above phrase pair, there are two occurrences of "of" on the target side, and the first one is redundant. The phrase pair induction algorithm presented in Section 4 cannot deal with this situation. In this section, we propose a simple approach to handle this problem.
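Before detailing that approach, the candidate selection of Lines 11-13 can be made concrete. The sketch below scores a candidate under one reading of Equation (2), aligning each target word to whichever source words can translate it under the lexicon; the dictionary layout, the null fallback and the function names are assumptions made for illustration, not the exact implementation.

```python
import math

def p_lex(tgt, src, lex, null_prob=1e-4):
    """Length-normalised lexical probability of Equation (2).
    lex maps (target_word, source_word) -> p(target_word | source_word);
    a target word with no lexicon link is treated as a gap aligned to
    null (in the full method its probability comes from Equation (1))."""
    log_score = 0.0
    for t in tgt:
        links = [lex[(t, s)] for s in src if (t, s) in lex]
        if not links:
            links = [lex.get((t, "null"), null_prob)]
        log_score += math.log(sum(links) / len(links))
    return math.exp(log_score / len(tgt))   # the 1/n length normalisation

def score_pair(src, tgt, lex_ts, lex_st):
    """Bidirectional score p_lex(t|s,a) * p_lex(s|t,a) used to pick the
    best continuous target phrase in a sentence (Line 12 of Figure 2)."""
    return p_lex(tgt, src, lex_ts) * p_lex(src, tgt, lex_st)
```

Among several continuous candidates in the same target sentence, the one maximising this bidirectional score is kept, in line with the constraint of Section 4.2.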
For each entry in LLR-lex, such as (的, of), we can learn two kinds of information from the out-of-domain word-aligned sentence pairs: one is whether the target translation appears before or after the translation of the preceding source-side word (Order); the other is whether the target translation is adjacent to the translation of the preceding source-side word (Adjacency). If the source-side word is the beginning of the phrase, we compute the corresponding information with respect to the succeeding word instead of the preceding word. For the entries in Domain-lex, we constrain the target translation to be adjacent to the translations of its source-side neighbors and the translation order to be the same as that of the source-side words. With the Order and Adjacency information, we first check the order information, and then check the adjacency information if the duplicates cannot be resolved by order alone. For example, (的, of) is an entry in LLR-lex, and we have learned that "of" is much more likely to appear after the translation of the succeeding word; thus, the first occurrence of "of" can be discarded. This refinement is applied before finding the phrase pair with maximum probability (Line 12 in Figure 2), so that duplicate words do not affect the calculation of the translation probability of the phrase pair.

5.2 Translation Probability Estimation

It is well known that in phrase-based SMT there are four translation probabilities and a reordering probability for each phrase pair. The translation probabilities in traditional phrase-based SMT include the bidirectional phrase translation probabilities and the bidirectional lexical weights. For the lexical weights, we can use the $p_{lex}(s \mid t, a)$ and $p_{lex}(t \mid s, a)$ computed in the previous section, without length normalization. However, for the phrase-level probability, we cannot use maximum likelihood estimation, since the phrase pairs are not extracted from parallel sentences.

In this paper, we borrow and extend the idea of Klementiev et al. (2012) to calculate the phrase-level translation probability with context information from the source and target monolingual corpora. The value is calculated using a vector space model. With source and target vocabularies $\{s_1, s_2, \ldots, s_N\}$ and $\{t_1, t_2, \ldots, t_M\}$, the source-side phrase $s$ and the target-side phrase $t$ can be represented as $N$- and $M$-dimensional vectors, respectively. The $k$-th component of $s$'s contextual vector is computed using the method of Fung and Yee (1998):

$w_k = n_{s,k} \times \big( \log(n_{max} / n_k) + 1 \big)$   (3)

where $n_{s,k}$ and $n_k$ denote the number of times $s_k$ occurs in the context of $s$ and in the entire source language monolingual data, respectively, and $n_{max}$ is the maximum number of occurrences of any source-side word in the source language monolingual data. The $k$-th element of $t$'s vector is computed in the same way. We finally normalize these vectors with the L2-norm.

With the contextual vector representations of $s$ and $t$, we calculate two similarities: 1) we project $s$'s vector into the target side with the lexical mapping $p(t|s)$, obtaining a vector $\hat{t}$, and compute the cosine similarity between $t$'s vector and $\hat{t}$; 2) we project $t$'s vector into the source side with the lexical mapping $p(s|t)$, obtaining $\hat{s}$, and compute the similarity between $s$'s vector and $\hat{s}$. These two contextual similarities serve as the two phrase-level translation probabilities.

5.3 Reordering Probability Estimation

For the reordering probabilities of newly induced phrase pairs, we can also follow Klementiev et al.
(2012) to estimate these probabilities using source and target monolingual data. The method is to calculate six probabilities for monotone, swap or discontinuous cases. For the phrase pair (的 商业 信息, business information of), we find a source sentence containing 的 商业 信息, and find a target sentence containing business information of. If there is another phrase pair   ,s t , t exactly follows business information of and s occurs in the same source sentence with 的 商业 信息, then we compare the position relationship between s and 的 商业 信息. We increment the swap count if s is just before 的 商业 信息. After counting, we finally use maximum likelihood estimation method to compute the reordering probabilities. 6 Related Work As far as we know, few researchers study phrase pair induction from only monolingual data. There are three research works that are most related with ours. One is using an in-domain probabilistic bilingual lexicon to extract subsentential parallel fragments from comparable corpora (Munteanu and Marcu, 2006; Quirk et al., 2007; Cettolo et al., 2010). Munteanu and Marcu (2006) first extract the candidate parallel sentences from the comparable corpora and further extract the accurate sub-sentential bilingual fragments from the candidate parallel sentences using the in-domain probabilistic bilingual lexicon. Compared with their work, our focus is to induce phrase pairs directly from monolingual data rather than comparable data. Thus, finding the candidate parallel sentences is not possible in our situation. Another is to make full use of monolingual data with transductive learning (Ueffing et al., 2007; Schwenk, 2008; Wu et al., 2008; Bertoldi and Federico, 2009). For the target-side monolingual data, they just use it to train language model, and for the source-side monolingual data, they employ a baseline (word-based SMT or phrasebased SMT trained with small-scale bitext) to first translate the source sentences, combining the source sentence and its target translation as a bilingual sentence pair, and then train a new phrase-base SMT with these pseudo sentence pairs. This method cannot learn idiom translations and unknown word translations. The third is to estimate the translation parameters and reordering parameters using monolingual data given the phrase pairs (Klementiev et al., 2012). Their work supposes the phrase pairs are already given and then corresponding parameters can be learned with monolingual data. Different from their work, we concentrate ourselves on inducing phrase pairs from monolingual data and then borrow some ideas from theirs for parameter estimation. Furthermore, we extend their contextual similarity between source and target phrases to both directions. 7 Experiments 7.1 Experimental Setup Our purpose is to induce phrase pairs to improve translation quality for domain adaptation. We have introduced the out-of-domain data and the electronic in-domain lexicon in Section 3. Here we introduce other information about the in1430 domain data. Besides the in-domain lexicon, we have collected respectively 1 million monolingual sentences in electronic area from the web. They are neither parallel nor comparable because we cannot even extract a small number of parallel sentence pairs from this monolingual data using the method of (Munteanu and Marcu, 2006). We further employ experts to translate 2000 Chinese electronic sentences into English. The first half is used as the tuning set (elec1000tune) and the second half is employed as the testing set (elec1000-test). 
We construct two kinds of phrase-based models using Moses (Koehn et al., 2007): one uses out-of-domain data and the other uses in-domain data. For the out-of-domain data, we build the phrase table and reordering table using the 2.08 million Chinese-to-English sentence pairs, and we use the SRILM toolkit (Stolcke, 2002) to train the 5-gram English language model with the target part of the parallel sentences and the Xinhua portion of the English Gigaword. For the in-domain electronic data, we first consider the lexicon as a phrase table in which we assign a constant 1.0 for each of the four probabilities, and then we combine this initial phrase table and the induced phrase pairs to form the new phrase table. The in-domain reordering table is created for the induced phrase pairs. An in-domain 5gram English language model is trained with the target 1 million monolingual data. We use BLEU (Papineni et al., 2002) score with shortest length penalty as the evaluation metric and apply the pairwise re-sampling approach (Koehn, 2004) to perform the significance test. 7.2 Experimental Results In this section, we first conduct experiments to figure out how the translation performance degrades when the domain changes. To better illustrate the comparison, we first use News data to evaluate the NIST evaluation tests and then use the same News data to evaluate the electronic test sets. For the NIST evaluation, we employ Chinese-to-English NIST MT03 as the tuning set and NIST MT05 as the test set. Table 3 gives the results. It is obvious that, it is relatively high when using the News training data to evaluate the same News test set. However, when the test domain is changed, the translation performance decreases to a large extent. Given the in-domain bilingual lexicon and two monolingual data, previous works also proposed some good methods to explore the potential of the given data to improve the translation quality. Here, we implement their approaches and use them as our strong baseline. Wu et al. (2008) regards the in-domain lexicon with corpus translation probability as another phrase table and further use the in-domain language model besides the out-of-domain language model. Table 4 gives the results. We can see from the table that the domain lexicon is much helpful and significantly outperforms the baseline with more than 4.0 BLEU points. When it is enhanced with the in-domain language model, it can further improve the translation performance by more than 2.5 BLEU points. This method has made good use of in-domain lexicon and the target-side indomain monolingual data, but it does not take full advantage of the in-domain source-side monolingual data. In order to use source-side monolingual data, Ueffing et al. (2007), Schwenk (2008), Wu et al. (2008) and Bertoldi and Federico (2009) employed the transductive learning to first translate the source-side monolingual data using the best configuration (baseline+in-domain lexicon+indomain language model) and obtain 1-best translation for each source-side sentence. With the source-side sentences and their translations, the new phrase table and reordering table are built. Then, these resources are added into the best configuration. The experimental results are presented in the last low of Table 4. From the results, we see that transductive learning can further improve the translation performance significantly by 0.6 BLEU points. In tranductive learning, in-domain lexicon and both-side monolingual data have been explored. 
However, this method does not take full advantage of both-side monolingual data because it uses source and target monolingual data individually. In our method, we explore fully the source and target monolingual data to induce translation equivalence on the phrase level. In order to make the phrase pair induction more efficient, we first sort all the sentences in the both-side monolingual data according to the word hit rate in the bilingual lexicon. Then, we conduct six sets of experiments respectively on the first 100k, 200k, 300k, 500k and whole 1m sentences. All the experiments are run based on the configuration with BLEU 13.41 in Table 4, and we call this configuration BestConfig. Note that the unknown words are only allowed if the source-side of a phrase pair has more than 3 words. Table 5 shows the results. 1431 Training Data Tune Data (NIST MT03) Test Data (NIST MT05) 2.08M sentence pairs in News 35.79 34.26 Tune Data (elec1000-tune) Test Data (elec1000-test) 7.93 6.69 Table 3: Experimental results using News training data to test NIST evaluation data and electronic data (numbers denote BLEU score points in percent). Method Tune (elec1000-tune) Test (elec1000-test) Baseline 7.93 6.69 baseline + in-domain lexicon 10.97 10.87 baseline + in-domain lexicon + indomain language model 13.72 13.41++ Transductive Learning 14.13 14.01* Table 4: Experimental results using News training data, in-domain lexicon, language model and transductive learning. Bold figures mean that the results are statistically significant better than the baseline with p<0.01, and “++” denotes the result is statistically significant better than baseline+in-domain lexicon. “*” means that the result is statistically significant better than 13.41 with p<0.05. Method Tune (BLEU %) Test (BLEU %) BestConfig 13.72 13.41 +phrase pair induction (100k) 14.23 14.06 +phrase pair induction (200k) 14.45 14.24 +phrase pair induction (300k) 14.76 14.83++ +phrase pair induction (500k) 14.98 15.16++ +phrase pair induction (1m) 15.11 15.30++ Table 5: Experimental results of our phrase pair induction method. Bold figures denotes the corresponding method significantly outperform the BestConfig with p<0.05. Bold and Italic figures means the results are significantly better than that of BestConfig with p<0.01. “++” denotes that the corresponding approach performs significantly better than Transductive Learning with p<0.01. Method Before Filtering After Filtering +phrase pair induction (100k) 72,615 8,724 +phrase pair induction (200k) 108,948 12,328 +phrase pair induction (300k) 136,529 17,505 +phrase pair induction (500k) 150,263 19,862 +phrase pair induction (1m) 169,172 21,486 Table 6: the number of phrase pairs induced with different size of monolingual data. We can see from the table that our method obtains the best translation performance. When using the first 100k sentences for phrase pair induction, it obtains a significant improvement over the BestConfig by 0.65 BLEU points and can outperform the transductive learning method. When we use more monolingual data, the performance becomes even better. The method of phrase pair induction using 300k sentences performs quite well. It outperforms the BestConfig significantly with an improvement of 1.42 BLEU points and it also performs much better than transductive learning method with gains of 0.82 BLEU points. With the monolingual data larger and larger, the gains become smaller and smaller because the word hit rate gets lower and lower. 
These experimental results empirically show the effectiveness of our proposed phrase pair induction method. A question remains that how many new phrase pairs are induced with different size of monolingual data. Here, we give respectively the statistics before and after filtering with the 1000 test sentences. Table 6 shows the statistics. We can see from the table that lots of new phrase pairs can be induced since the source and target monolingual data is in the same domain. However, since the source and target monolingual data is 1432 far from parallel, most of the phrase pairs are not long. For example, in the 108,948 distinct phrase pairs, we find that the phrase pair distribution according to source-side length is (3:50.6%, 4:35.6%, 5:3.3%, 6:9.8%, 7:0.7%). It is easy to see that the phrase pairs whose source-side length longer than 4 account for only a very small part. 8 Conclusion and Future Work This paper proposes a simple but effective method to induce phrase pairs from monolingual data. Given the probabilistic bilingual lexicon and both-side monolingual data in the same domain, the method employs inverted index structure to represent the target-side monolingual data, and induce the translations for each distinct sourceside phrase with the help of the bilingual lexicon. We further propose an approach to refine the result phrase pairs to make them more accurate. We also introduce how to estimate the translation and reordering parameters for the induced phrase pairs with monolingual data. Extensive experiments on domain adaptation have shown that our method can significantly outperform previous methods which also focus on exploring the indomain lexicon and monolingual data. However, through the analysis we find that our induced phrase pairs still contain some noise, such as the words in source- and target-side of the phrase pair are all aligned but the target-side phrase expresses the different meaning. Furthermore, our proposed method cannot learn expressions which are not lexical translations but are semantic ones. In the future, we will study further on these phenomena and propose new methods to handle these problems. Acknowledgments The research work has been funded by the HiTech Research and Development Program (“863” Program) of China under Grant No. 2011AA01A207, 2012AA011101 and 2012AA011102, and also supported by the Key Project of Knowledge Innovation of Program of Chinese Academy of Sciences under Grant No. KGZD-EW-501. We would also like to thank the anonymous reviewers for their valuable suggestions. References Nicola Bertoldi and Marcello Federico, 2009. Domain adaptation for statistical machine translation with monolingual resources. In Proc. of the Fourth Workshop on Statistical Machine Translation, pages 182-189. Mauro Cettolo, Marcello Federico and Nicola Bertoldi, 2010. Mining parallel fragments from comparable texts. In Proc. of the seventh International Workshop on Spoken Language Translation (IWSLT), pages 227-234. David Chiang, 2007. Hierarchical phrase-based translation. computational linguistics, 33 (2). pages 201-228. David Chiang, 2010. Learning to translate with source and target syntax. In Proc. of ACL 2010, pages 1443-1452. Hal Daumé III and Jagadeesh Jagarlamudi, 2011. Domain adaptation for machine translation by mining unseen words. In Proc. of ACL-HLT 2011. Qing Dou and Kevin Knight, 2012. Large Scale Decipherment for Out-of-Domain Machine Translation. In Proc. of EMNLP-CONLL 2012. Ted Dunning, 1993. 
Accurate methods for the statistics of surprise and coincidence. computational linguistics, 19 (1). pages 61-74. Pascale Fung and Lo Yuen Yee, 1998. An IR approach for translating new words from nonparallel, comparable texts. In Proc. of ACLCOLING 1998., pages 414-420. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang and Ignacio Thayer, 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. of COLING-ACL 2006, pages 961-968. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick and Dan Klein, 2008. Learning bilingual lexicons from monolingual corpora. In Proc. of ACL-08: HLT, pages 771-779. Liang Huang, Kevin Knight and Aravind Joshi, 2006. A syntax-directed translator with extended domain of locality. In Proc. of AMTA 2006, pages 1-8. Alexandre Klementiev, Ann Irvine, Chris CallisonBurch and David Yarowsky, 2012. Toward statistical machine translation without parallel corpora. In Proc. of EACL 2012., pages 130-140. Philipp Koehn, 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP 2004., pages 388-395, Barcelona, Spain, July 25th-26th, 2004. Philipp Koehn, Hieu Hoang, Alexandra Birch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin and Evan Herbst, 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL on Interactive Poster and Demonstration Sessions 2007., pages 177-180, Prague, Czech Republic, June 27th-30th, 2007. Philipp Koehn and Kevin Knight, 2002. Learning a translation lexicon from monolingual corpora. In 1433 Proc. of the ACL-02 workshop on Unsupervised lexical acquisition, pages 9-16. Yang Liu, Qun Liu and Shouxun Lin, 2006. Tree-tostring alignment template for statistical machine translation. In Proc. of COLING-ACL 2006, pages 609-616. I. Dan Melamed, 2000. Models of translational equivalence among words. computational linguistics, 26 (2). pages 221-249. Rorbert C. Moore, 2004a. Improving IBM wordalignment model 1. In Proc. of ACL 2004. Rorbert C. Moore, 2004b. On log-likelihood-ratios and the significance of rare events. In Proc. of EMNLP 2004., pages 333-340. Dragos Stefan Munteanu and Daniel Marcu, 2006. Extracting parallel sub-sentential fragments from non-parallel corpora. In Proc. of ACL-COLING 2006. Malte Nuhn, Arne Mauser and Hermann Ney, 2012. Deciphering Foreign Language by Combining Language Models and Context Vectors. In Proc. of ACL 2012. Franz Josef Och and Hermann Ney., 2003. A systematic comparison of various statistical alignment models. computational linguistics, 29 (1). pages 19-51. Kishore Papineni, Salim Roukos, Todd Ward and Wei-Jing Zhu, 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL 2002., pages 311-318. Chris Quirk, Raghavendra Udupa and Arul Menezes, 2007. Generative models of noisy translations with applications to parallel fragment extraction. In Proc. of the Machine Translation Summit XI, pages 377-384. Reinhard Rapp, 1995. Identifying word translations in non-parallel texts. In Proc. of ACL 1995, pages 320-322. Reinhard Rapp, 1999. Automatic identification of word translations from unrelated English and German corpora. In Proc. of ACL 1999, pages 519-526. Sujith Ravi and Kevin Knight, 2011. Deciphering foreign language. In Proc. of ACL 2011., pages 12-21. Holger Schwenk, 2008. Investigations on largescale lightly-supervised training for statistical machine translation. In Proc. 
of IWSLT 2008, pages 182189. Andreas Stolcke, 2002. SRILM-an extensible language modeling toolkit. In Proc. of 7th International Conference on Spoken Language Processing, pages 901-904, Denver, Colorado, USA, September 16th-20th, 2002. Nicola Ueffing, Gholamreza Haffari and Anoop Sarkar, 2007. Transductive learning for statistical machine translation. In Proc. of ACL 2007. Hua Wu, Haifeng Wang and Chengqing Zong, 2008. Domain adaptation for statistical machine translation with domain dictionary and monolingual corpora. In Proc. of COLING 2008., pages 993-1000. Feifei Zhai, Jiajun Zhang, Yu Zhou and Chengqing Zong, 2011. Simple but effective approaches to improving tree-to-tree model. In Proc. of MT Summit XIII 2011, pages 261-268. Feifei Zhai, Jiajun Zhang, Yu Zhou and Chengqing Zong, 2012. Tree-based translation without using parse trees. In Proc. of COLING 2012, pages 3037-3054. Jiajun Zhang, Feifei Zhai and Chengqing Zong, 2011. Augmenting string-to-tree translation models with fuzzy use of the source-side syntax. In Proc. of EMNLP 2011, pages 204-215. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan and Sheng Li, 2008. A tree sequence alignment-based tree-to-tree translation model. In Proc. of ACL-08: HLT, pages 559-567. 1434
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1435–1445, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics SenseSpotting: Never let your parallel data tie you to an old domain Marine Carpuat1, Hal Daum´e III2, Katharine Henry3, Ann Irvine4, Jagadeesh Jagarlamudi5, Rachel Rudinger6 1 National Research Council Canada, [email protected] 2 CLIP, University of Maryland, [email protected] 3 CS, University of Chicago, [email protected] 4 CLSP, Johns Hopkins University, [email protected] 5 IBM T.J. Watson Research Center, [email protected] 6 CLSP, Johns Hopkins University, [email protected] Abstract Words often gain new senses in new domains. Being able to automatically identify, from a corpus of monolingual text, which word tokens are being used in a previously unseen sense has applications to machine translation and other tasks sensitive to lexical semantics. We define a task, SENSESPOTTING, in which we build systems to spot tokens that have new senses in new domain text. Instead of difficult and expensive annotation, we build a goldstandard by leveraging cheaply available parallel corpora, targeting our approach to the problem of domain adaptation for machine translation. Our system is able to achieve F-measures of as much as 80%, when applied to word types it has never seen before. Our approach is based on a large set of novel features that capture varied aspects of how words change when used in new domains. 1 Introduction As Magnini et al. (2002) observed, the domain of the text that a word occurs in is a useful signal for performing word sense disambiguation (e.g. in a text about finance, bank is likely to refer to a financial institution while in a text about geography, it is likely to refer to a river bank). However, in the classic WSD task, ambiguous word types and a set of possible senses are known in advance. In this work, we focus on the setting where we observe texts in two different domains and want to identify words in the second text that have a sense that did not appear in the first text, without any lexical knowledge in the new domain. To illustrate the task, consider the French noun rapport. In the parliament domain, this means ´etat rapport r´egime Govt. geo. state report (political) regime Medical state (mind) report diet geo. state ratio (political) regime Science geo. state ratio (political) regime report diet Movies geo. state report (political) regime diet Table 1: Examples of French words and their most frequent senses (translations) in four domains. (and is translated as) “report.” However, in moving to a medical or scientific domain, the word gains a new sense: “ratio”, which simply does not exist in the parliament domain. In a science domain, the “report” sense exists, but it is dominated about 12:1 by “ratio.” In a medical domain, the “report” sense remains dominant (about 2:1), but the new “ratio” sense appears frequently. In this paper we define a new task that we call SENSESPOTTING. The goal of this task is to identify words in a new domain monolingual text that appeared in old domain text but which have a new, previously unseen sense1. We operate under the framework of phrase sense disambiguation (Carpuat and Wu, 2007), in which we take automatically align parallel data in an old domain to generate an initial old-domain sense inventory. This sense inventory provides the set of “known” word senses in the form of phrasal translations. Concrete examples are shown in Table 1. 
One of our key contributions is the development of a rich set of features based on monolingual text that are indicative of new word senses. This work is driven by an application need. When machine translation (MT) systems are applied in a new domain, many errors are a result of: (1) previously unseen (OOV) source language words, or (2) source language words that appear with a new sense and which require new transla1All features, code, data and raw results are at: github. com/hal3/IntrinsicPSDEvaluation 1435 tions2 (Carpuat et al., 2012). Given monolingual text in a new domain, OOVs are easy to identify, and their translations can be acquired using dictionary extraction techniques (Rapp, 1995; Fung and Yee, 1998; Schafer and Yarowsky, 2002; Schafer, 2006; Haghighi et al., 2008; Mausam et al., 2010; Daum´e III and Jagarlamudi, 2011), or active learning (Bloodgood and Callison-Burch, 2010). However, previously seen (even frequent) words which require new translations are harder to spot. Because our motivation is translation, one significant point of departure between our work and prior related work (§3) is that we focus on word tokens. That is, we are not interested only in the question of “has this known word (type) gained a new sense?”, but the much more specific question of “is this particular (token) occurrence of this known word being used in a new sense?” Note that for both the dictionary mining setting and the active learning setting, it is important to consider words in context when acquiring their translations. 2 Task Definition Our task is defined by two data components. Details about their creation are in §5. First, we need an old-domain sense dictionary, extracted from French-English parallel text (in our case, parliamentary proceedings). Next, we need new-domain monolingual French text (we use medical text, scientific text and movie subtitle text). Given these two inputs, our challenge is to find tokens in the new-domain text that are being used in a new sense (w.r.t. the old-domain dictionary). We assume that we have access to a small amount of new domain parallel “tuning data.” From this data, we can extract a small new domain dictionary (§5). By comparing this new domain dictionary to the old domain dictionary, we can identify which words have gained new senses. In this way, we turn the SENSESPOTTING problem into a supervised binary classification problem: an example is a French word in context (in the new domain monolingual text) and its label is positive when it is being used in a sense that did not exist in the old domain dictionary. In this task, the classifier is always making predictions on words 2Sense shifts do not always demand new translations; some ambiguities are preserved across languages. E.g., fenˆetre can refer to a window of a building or on a monitor, but translates as “window” either way. Our experiments use bilingual data with an eye towards improving MT performance: we focus on words that demand new translations. outside this tuning data on word types it has never seen before! From an applied perspective, the assumption of a small amount of parallel data in the new domain is reasonable: if we want an MT system for a new domain, we will likely have some data for system tuning and evaluation. 
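For concreteness, the following sketch shows how the binary labels can be derived by comparing the two dictionaries; representing each dictionary as a map from a French word type to its set of English translations is an assumption made for illustration.

```python
def label_tokens(old_dict, new_domain_tokens):
    """old_dict: French type -> set of English translations seen in the
    old-domain dictionary.  new_domain_tokens: (french_word, translation)
    pairs observed in the word-aligned new-domain tuning data.  A token is
    a positive example when its translation is not a known old-domain sense."""
    labels = []
    for fr, en in new_domain_tokens:
        known = old_dict.get(fr, set())
        labels.append((fr, en, en not in known))
    return labels

old_dict = {"rapport": {"report"}}
tuning_tokens = [("rapport", "report"), ("rapport", "ratio")]
print(label_tokens(old_dict, tuning_tokens))
# [('rapport', 'report', False), ('rapport', 'ratio', True)]
```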
3 Related Work While word senses have been studied extensively in lexical semantics, research has focused on word sense disambiguation, the task of disambiguating words in context given a predefined sense inventory (e.g., Agirre and Edmonds (2006)), and word sense induction, the task of learning sense inventories from text (e.g., Agirre and Soroa (2007)). In contrast, detecting novel senses has not received as much attention, and is typically addressed within word sense induction, rather than as a distinct SENSESPOTTING task. Novel sense detection has been mostly motivated by the study of language change over time. Most approaches model changes in co-occurrence patterns for word types when moving between corpora of old and modern language (Sagi et al., 2009; Cook and Stevenson, 2010; Gulordava and Baroni, 2011). Since these type-based models do not capture polysemy in the new language, there have been a few attempts at detecting new senses at the tokenlevel as in SENSESPOTTING. Lau et al. (2012) leverage a common framework to address sense induction and disambiguation based on topic models (Blei et al., 2003). Sense induction is framed as learning topic distributions for a word type, while disambiguation consists of assigning topics to word tokens. This model can interestingly be used to detect newly coined senses, which might co-exist with old senses in recent language. Bamman and Crane (2011) use parallel Latin-English data to learn to disambiguate Latin words into English senses. New English translations are used as evidence that Latin words have shifted sense. In contrast, the SENSESPOTTING task consists of detecting when senses are unknown in parallel data. Such novel sense induction methods require manually annotated datasets for the purpose of evaluation. This is an expensive process and therefore evaluation is typically conducted on a very small scale. In contrast, our SENSESPOTTING task leverages automatically word-aligned parallel corpora as a source of annotation for supervision during training and evaluation. 1436 The impact of domain on novel senses has also received some attention. Most approaches operate at the type-level, thus capturing changes in the most frequent sense of a word when shifting domains (McCarthy et al., 2004; McCarthy et al., 2007; Erk, 2006; Chan and Ng, 2007). Chan and Ng (2007) notably show that detecting changes in predominant sense as modeled by domain sense priors can improve sense disambiguation, even after performing adaptation using active learning. Finally, SENSESPOTTING has not been addressed directly in MT. There has been much interest in translation mining from parallel or comparable corpora for unknown words, where it is easy to identify which words need translations. In contrast, SENSESPOTTING detects when words have new senses and, thus, frequently a new translation. Work on active learning for machine translation has focused on collecting translations for longer unknown segments (e.g., Bloodgood and CallisonBurch (2010)). There has been some interest in detecting which phrases that are hard to translate for a given system (Mohit and Hwa, 2007), but difficulties can arise for many reasons: SENSESPOTTING focuses on a single problem. 4 New Sense Indicators We define features over both word types and word tokens. In our classification setting, each instance consists of a French word token in context. Our word type features ignore this context and rely on statistics computed over our entire new domain corpus. 
In contrast, our word token features consider the context of the particular instance of the word. If it were the case that only one sense existed for all word tokens of a particular type within a single domain, we would expect our word type features to be able to spot new senses without the help of the word token features. However, in fact, even within a single domain, we find that often a word type is used with several senses, suggesting that word token features may also be useful. 4.1 Type-level Features Lexical Item Frequency Features A very basic property of the new domain that we hope to capture is that word frequencies change, and such changes might be indicative of a domain shift. As such, we compute unigram log probabilities (via smoothed relative frequencies) of each word under consideration in the old domain and the new domain. We then add as features these two log probabilities as well as their difference. These are our Type:RelFreq features. N-gram Probability Features The goal of the Type:NgramProb feature is to capture the fact that “unusual contexts” might imply new senses. To capture this, we can look at the log probability of the word under consideration given its N-gram context, both according to an old-domain language model (call this ℓold ng ) and a new-domain language model (call this ℓnew ng ). However, we do not simply want to capture unusual words, but words that are unlikely in context, so we also need to look at the respective unigram log probabilities: ℓold ug and ℓnew ug . From these four values, we compute corpuslevel (and therefore type-based) statistics of the new domain n-gram log probability (ℓnew ng , the difference between the n-gram probabilities in each domain (ℓnew ng −ℓold ng ), the difference between the n-gram and unigram probabilities in the new domain (ℓnew ng −ℓnew ug ), and finally the combined difference: ℓnew ng −ℓnew ug + ℓold ug −ℓold ng ). For each of these four values, we compute the following type-based statistics over the monolingual text: mean, standard deviation, minimum value, maximum value and sum. We use trigram models. Topic Model Feature The intuition behind the topic model feature is that if a word’s distribution over topics changes when moving into a new domain, it is likely to also gain a new sense. For example, suppose that in our old domain, the French word enceinte is only used with the sense “wall,” but in our new domain, enceinte may have senses corresponding to either “wall” or to “pregnant.” We would expect to see this reflected in enceinte’s distribution over topics: the topic that places relatively high probabilities on words such as “b´eb´e” (English “baby”) and enfant (English “child”) will also place a high probability on enceinte when trained on new domain data. In the old domain, however, we would not expect a similar topic (if it exists) to give a high probability to enceinte. Based on this intuition, for all words w, where To and Tn are the set of old and new topics and Po and Pn are the old and new distributions defined over them, respectively, and cos is the cosine similarity between a pair of topics, we define the feature Type:TopicSim: P t∈Tn,t′∈To Pn(t|w)Po(t′|w) cos(t, t′). For a word w, the feature value will be high if, for each new domain topic t that places high probability on w, there is an old domain topic t′ that 1437 is similar to t and also places a high probability on w. Conversely, if no such topic exists, the score will be low, indicating the word has gained a new sense. 
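A sketch of this computation, in which each topic is represented by its word distribution over a shared vocabulary so that cos(t, t') can be taken between topic vectors; the array names and toy dimensions are assumptions made for illustration.

```python
import numpy as np

def topic_sim(p_new, p_old, topics_new, topics_old):
    """Type:TopicSim = sum_{t,t'} P_n(t|w) * P_o(t'|w) * cos(t, t').
    p_new, p_old: the word's topic distributions in the new and old domain;
    topics_new, topics_old: topic-word matrices (one row per topic)."""
    a = topics_new / np.linalg.norm(topics_new, axis=1, keepdims=True)
    b = topics_old / np.linalg.norm(topics_old, axis=1, keepdims=True)
    cos = a @ b.T                      # pairwise cosine, shape (|Tn|, |To|)
    return float(p_new @ cos @ p_old)

# Toy example: two topics per domain over a three-word vocabulary.
topics_new = np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]])
topics_old = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]])
print(topic_sim(np.array([0.9, 0.1]), np.array([0.8, 0.2]),
                topics_new, topics_old))
```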
We use the online LDA (Blei et al., 2003; Hoffman et al., 2010), implemented in http://hunch.net/˜vw/ to compute topics on the two domains separately. We use 100 topics. Context Feature It is expected that words acquiring new senses will tend to neighbor different sets of words (e.g. different arguments, prepositions, parts of speech, etc.). Thus, we define an additional type level feature to be the ratio of the number of new domain n-grams (up to length three) that contain word w and which do not appear in the old domain to the total number of new domain n-grams containing w. With Nw indicating the set of n-grams in the new domain which contain w, Ow indicating the set of n-grams in the old domain which contain w, and |Nw −Ow| indicating the n-grams which contain w and appear in the new but not the old domain, we define Type:Contextas |Nw−Ow| |Nw| . We do not count n-grams containing OOVs, as they may simply be instances of applying the same sense of a word to a new argument 4.2 Token-level Features N-gram Probability Features Akin to the Ngram probability features at the type level (namely, Token:NgramProb), we compute the same values at the token level (new/old domain and unigram/trigram). Instead of computing statistics over the entire monolingual corpus, we use the instantaneous values of these features for the token under consideration. The six features we construct are: unigram (and trigram) log probabilities in the old domain, the new domain, and their difference. Context Features Following the type-level ngram feature, we define features for a particular word token based on its n-gram context. For token wi, in position i in a given sentence, we consider its context words in a five word window: wi−2, wi−1, wi+1, and wi+2. For each of the four contextual words in positions p = {−2, −1, 1, 2}, relative to i, we define the following feature, Token:CtxCnt: log(cwp) where cwp is the number of times word wp appeared in position p relative to wi in the OLD-domain data. We also define a single feature which is the percent of the four contextual words which had been seen in the OLDdomain data, Token:Ctx%. Token-Level PSD Features These features aim to capture generalized characteristics of a context. Towards this end, first, we pose the problem as a phrase sense disambiguation (PSD) problem over the known sense inventory. Given a source word in a context, we train a classifier to predict the most likely target translation. The ground truth labels (target translation for a given source word) for this classifier are generated from the phrase table of the old domain data. We use the same set of features as in Carpuat and Wu (2007). Second, given a source word s, we use this classifier to compute the probability distribution of target translations p(t|s)  . Subsequently, we use this probability distribution to define new features for the SENSESPOTTING task. The idea is that, if a word is used in one of the known senses then its context must have been seen previously and hence we hope that the PSD classifier outputs a spiky distribution. On the other hand, if the word takes a new sense then hopefully it is used in an unseen context resulting in the PSD classifier outputting an uniform distribution. Based on this intuition, we add the following features: MaxProb is the maximum probability of any target translation: maxt p(t|s). Entropy is the entropy of the probability distribution: −P t p(t|s) log p(t|s). 
Spread is the difference between maximum and minimum probabilities of the probability distribution: maxt p(t|s) −mint p(t|s)  . Confusion is the uncertainty in the most likely prediction given the source token: mediantp(t|s) maxt p(t|s) . The use of median in the numerator rather than the second best is motivated by the observation that, in most cases, top ranked translations are of the same sense but differ in morphology. We train the PSD classifier in two modes: 1) a single global classifier that predicts the target translation given any source word; 2) a local classifier for each source word. When training the global PSD classifier, we include some lexical features that depend on the source word. For both modes, we use real valued and binned features giving rise to four families of features Token:G-PSD, Token:G-PSDBin, Token:L-PSD and Token:L-PSDBin. Prior vs. Posterior PSD Features When the PSD classifier is trained in the second mode, i.e. one classifier per word type, we can define additional features based on the prior (with out the word context) and posterior (given the word’s context) probability distributions output by the classifier, i.e. pprior(t|s) and ppost.(t|s) respec1438 Domain Sentences Lang Tokens Types Hansard 8,107,356 fr 161,695,309 191,501 en 144,490,268 186,827 EMEA 472,231 fr 6,544,093 34,624 en 5,904,296 29,663 Science 139,215 fr 4,292,620 117,669 en 3,602,799 114,217 Subs 19,239,980 fr 154,952,432 361,584 en 174,430,406 293,249 Table 2: Basic characteristics of the parallel data. tively. We compute the following set of features referred to as Token:PSDRatio: SameMax checks if both the prior and posterior distributions have the same translation as the most likely translation. SameMin is same as the above feature but check if the least likely translation is same. X-OR MinMax is the exclusiveOR of SameMax and SameMin features. KL is the KL-divergence between the two distributions. Since KL-divergence is asymmetric, we use KL(pprior||ppost.) and KL(ppost.||pprior). MaxNorm is the ratio of maximum probabilities in prior and posterior distributions. SpreadNorm is the ratio of spread of the prior and posterior distributions, where spared is the difference between maximum and minimum probabilities of the distribution as defined earlier. ConfusionNorm is the ratio of confusion of the prior and posterior distributions, where confusion is defined as earlier. 5 Data and Gold Standard The first component of our task is a parallel corpus of old domain data, for which we use the French-English Hansard parliamentary proceedings (http://www.parl.gc.ca). From this, we extract an old domain sense dictionary, using the Moses MT framework (Koehn et al., 2007). This defines our old domain sense dictionary. For new domains, we use three sources: (1) the EMEA medical corpus (Tiedemann, 2009), (2) a corpus of scientific abstracts, and (3) a corpus of translated movie subtitles (Tiedemann, 2009). Basic statistics are shown in Table 2. In all parallel corpora, we normalize the English for American spelling. To create the gold standard truth, we followed a lexical sample apparoach and collected a set of 300 “representative types” that are interesting to evaluate on, because they have multiple senses within a single domain or whose senses are likely to change in a new domain. We used a semi-automatic approach to identify representative types. We first used the phrase table from Parallel Repr. Repr. 
% New Sents fr-tok Types Tokens Sense EMEA 24k 270k 399 35,266 52.0% Science 22k 681k 425 8,355 24.3% Subs 36k 247k 388 22,598 43.4% Table 3: Statistics about representative words and the size of the development sets. The columns show: the total amount of parallel development data (# of sentences and tokens in French), # of representative types that appear in this corpus, the corresponding # of tokens, and the percentage of these tokens that correspond to “new senses.” the Moses output to rank phrases in each domain using TF-IDF scores with Okapi BM25 weighting. For each of the three new domains (EMEA, Science, and Subs), we found the intersection of phrases between the old and the new domain. We then looked at the different translations that each had in the phrase table and a French speaker selected a subset that have multiple senses.3 In practice, we limited our set almost entirely to source words, and included only a single multiword phrase, vue des enfants, which usually translates as “for children” in the old domain but almost always translates as “sight of children” in the EMEA domain (as in “...should be kept out of the sight of children”). Nothing in the way we have defined, approached, or evaluated the SENSESPOTTING task is dependent on the use of representative words instead of longer representative phrases. We chose to consider mostly source language words for simplicity and because it was easier to identify good candidate words. In addition to the manually chosen words, we also identified words where the translation with the highest lexical weight varied in different domains, with the intuition being that are the words that are likely to have acquired a new sense. The top 200 words from this were added to the manually selected representative words to form a list of 450. Table 3 shows some statistics about these words across our three test domains. 6 Experiments 6.1 Experimental setup Our goal in evaluation is to be able to understand what our approach is realistically capable of. One challenge is that the distribution 3In order to create the evaluation data, we used both sides of the full parallel text; we do not use the English side of the parallel data for actually building systems. 1439 of representative words is highly skewed.4 We present results in terms of area under the ROC curve (AUC),5 micro-averaged precision/recall/fmeasure and macro-averaged precision/recall/fmeasure. For macro-averaging, we compute a single confusion matrix over all the test data and determining P/R/F from that matrix. For microaveraging, we compute a separate confusion matrix for each word type on the French side, compute P/R/F for each of these separately, and then average the results. (Thus, micro-F is not a function of micro-P and micro-R.) The AUC and macro-averaged scores give a sense of how well the system is doing on a type-level basis (essentially weighted by type frequency), while the micro-averaged scores give a sense as to how well the system is doing on individual types, not taking into account their frequencies. For most of our results, we present standard deviations to help assess significance (±2σ is roughly a 90% confidence interval). For our results, in which we use new-domain training data, we compute these results via 16-fold cross validation. The folds are split across types so the system is never being tested on a word type that it has seen before. We do this because it more closely resembles our application goals. 
We do 16-fold for convenience, because we divide the data into binary folds recursively (thus having a power-of-two is easier), with an attempt to roughly balance the size of the training sets in each fold (this is tricky because of the skewed nature of the data). This entire 16-fold cross-validation procedure is repeated 10 times and averages and standard deviations are over the 160 replicates. We evaluate performance using our type-level features only, TYPEONLY, our token-level features only, TOKENONLY, and using both our type and our token level features, ALLFEATURES. We compare our results with two baselines: RANDOM and CONSTANT. RANDOM predicts new-sense or not-new-sense randomly and with equal probability. CONSTANT always predicts new-sense, achieving 100% recall and a macrolevel precision that is equal to the percent of representative words which do have a new sense, modulo cross-validation splits (see Table 3). Addi4The most frequent (voie) appears 3881 times; there are 60 singleton words on average across the three new domains. 5AUC is the probability that the classifier will assign a higher score to a randomly chosen positive example than to a randomly chosen negative example (Wikipedia, 2013). tionally, we compare our results with a type-level oracle, TYPEORACLE. For all tokens of a given word type, the oracle predicts the majority label (new-sense or not-new-sense) for that word type. These results correspond to an upper bound for the TYPEONLY experiments. 6.2 Classification Setup For all experiments, we use a linear classifier trained by stochastic gradient descent to optimize logistic loss. We also did some initial experiments on development data using boosted decision trees instead and other loss functions (hinge loss, squared loss), but they never performed as well. In all cases, we perform 20 passes over the training data, using development data to perform early stopping (considered at the end of each pass). We also use development data to tune a regularizer (either ℓ1 or ℓ2) and its regularization weight.6 Finally, all real valued features are automatically bucketed into 10 consecutive buckets, each with (approximately) the same number of elements. Each learner uses a small amount of development data to tune a threshold on scores for predicting new-sense or not-a-new-sense, using macro F-measure as an objective. 6.3 Result Summary Table 4 shows our results on the SENSESPOTTING task. Classifiers based on the features that we defined outperform both baselines in all macro-level evaluations for the SENSESPOTTING task. Using AUC as an evaluation metric, the TOKENONLY, TYPEONLY, and ALLFEATURES models performed best on EMEA, Science, and Subtitles data, respectively. Our token-level features perform particularly poorly on the Science and Subtitles data. Although the model trained on only those features achieves reasonable precision (72.59 and 70.00 on Science and Subs, respectively), its recall is very low (20.41 and 35.15), indicating that the model classifies many new-sense words as not-new-sense. Most of our token-level features capture the intuition that when a word token appears in new or infrequent contexts, it is likely to have gained a new sense. Our results indicate that this intuition was more fruitful for EMEA than for Science or Subs. 
In contrast, the type-only features (TYPEONLY) 6We use http://hunch.net/˜vw/ version 7.1.2, and run it with the following arguments that affect learning behavior: --exact adaptive norm --power t 0.5 1440 Macro Micro AUC P R F P R F EMEA RANDOM 50.34 ± 0.60 51.24 ± 0.59 50.09 ± 1.18 50.19 ± 0.75 47.04 ± 0.60 56.07 ± 1.99 37.27 ± 0.91 CONSTANT 50.00 ± 0.00 50.99 ± 0.00 100.0 ± 0.00 67.09 ± 0.00 45.80 ± 0.00 100.0 ± 0.00 52.30 ± 0.00 TYPEONLY 55.91 ± 1.13 69.76 ± 3.45 43.13 ± 1.42 41.61 ± 1.07 77.92 ± 2.04 50.12 ± 2.35 31.26 ± 0.63 TYPEORACLE 88.73 ± 0.00 87.32 ± 0.00 86.76 ± 0.00 87.04 ± 0.00 90.01 ± 0.00 67.46 ± 0.00 59.39 ± 0.00 TOKENONLY 78.80 ± 0.52 69.83 ± 1.59 75.58 ± 2.61 69.40 ± 1.92 59.03 ± 1.70 62.53 ± 1.66 43.39 ± 0.94 ALLFEATURES 79.60 ± 1.20 68.11 ± 1.19 79.84 ± 2.27 71.64 ± 1.83 55.28 ± 1.11 71.50 ± 1.62 46.83 ± 0.62 Science RANDOM 50.18 ± 0.78 24.48 ± 0.57 50.32 ± 1.33 32.92 ± 0.79 46.99 ± 0.51 60.32 ± 1.06 34.72 ± 1.03 CONSTANT 50.00 ± 0.00 24.34 ± 0.00 100.0 ± 0.00 39.15 ± 0.00 44.39 ± 0.00 100.0 ± 0.00 50.44 ± 0.00 TYPEONLY 77.06 ± 1.23 66.07 ± 2.80 36.28 ± 4.10 34.50 ± 4.06 84.97 ± 0.82 36.81 ± 2.33 24.22 ± 1.70 TYPEORACLE 88.76 ± 0.00 78.43 ± 0.00 69.29 ± 0.00 73.54 ± 0.00 84.19 ± 0.00 67.41 ± 0.00 52.67 ± 0.00 TOKENONLY 66.62 ± 0.47 60.50 ± 3.11 28.05 ± 2.06 30.81 ± 2.75 76.21 ± 1.78 36.57 ± 2.23 24.68 ± 1.36 ALLFEATURES 73.91 ± 0.66 50.59 ± 2.08 60.60 ± 2.04 47.54 ± 1.52 66.72 ± 1.19 62.30 ± 1.36 40.22 ± 1.03 Subs RANDOM 50.26 ± 0.69 42.47 ± 0.60 50.17 ± 0.84 45.68 ± 0.68 52.18 ± 1.32 54.63 ± 2.01 39.87 ± 2.10 CONSTANT 50.00 ± 0.00 42.51 ± 0.00 100.0 ± 0.00 59.37 ± 0.00 50.63 ± 0.00 100.0 ± 0.00 58.67 ± 0.00 TYPEONLY 67.16 ± 0.73 76.41 ± 1.51 31.91 ± 3.15 36.37 ± 2.58 90.03 ± 0.61 34.78 ± 1.12 26.20 ± 0.61 TYPEORACLE 81.35 ± 0.00 83.12 ± 0.00 70.23 ± 0.00 76.12 ± 0.00 90.62 ± 0.00 52.37 ± 0.00 44.43 ± 0.00 TOKENONLY 63.30 ± 0.99 63.17 ± 2.31 45.38 ± 2.07 43.30 ± 1.29 76.38 ± 1.68 49.70 ± 1.76 37.92 ± 1.20 ALLFEATURES 69.26 ± 0.60 63.48 ± 1.77 56.22 ± 2.66 52.78 ± 1.96 67.55 ± 0.83 62.18 ± 1.45 43.85 ± 0.90 Table 4: Complete SENSESPOTTING results for all domains. The scores are from cross-validation on a single domain; in all cases, higher is better. Two standard deviations of performance over the crossvalidation are shown in small type. For all domains and metrics, the highest (not necessarily statistically significant) non-oracle results are bolded. are relatively weak for predicting new senses on EMEA data but stronger on Subs (TYPEONLY AUC performance is higher than both baselines) and even stronger on Science data (TYPEONLY AUC and f-measure performance is higher than both baselines as well as the ALLFEATURESmodel). In our experience with the three datasets, we know that the Science data, which contains abstracts from a wide variety of scientific disciplines, is the most diverse, followed by the Subs data, and then EMEA, which mostly consists of text from drug labels and tends to be quite repetitive. Thus, it makes sense that type-level features would be the most informative for the least homogeneous dataset. Representative words in scientific text are likely to appear in variety of contexts, while in the EMEA data they may only appear in a few, making it easier to contrast them with the distributions observed in the old domain data. For all domains, in micro-level evaluation, our models fail to outperform the CONSTANT baseline. 
Recall that the micro-level evaluation computes precision, recall, and f-measure for all word tokens of a given word type and then averages across word types. We observe that words that are less frequent in both the old and the new domains are more likely to have a new sense than more frequent words, which causes the CONSTANT baseline to perform reasonably well. In contrast, it is more difficult for our models to make good predictions for less frequent words. A low frequency in the new domain makes type level features (estimated over only a few instances) noisy and unreliable. Similarly, a low frequency in the old domain makes the our token level features, which all contrast with old domain instances of the word type. 6.4 Feature Ablation In the previous section, we observed that (with one exception) both Type-level and Token-level features are useful in our task (in some cases, essential). In this section, we look at finer-grained feature distinctions through a process of feature ablation. In this setting, we begin with all features in a model and remove one feature at a time, always removing the feature that hurts performance least. For these experiments, we determine which feature to remove using AUC. Note that we’re actually able to beat (by 2-4 points AUC) the scores from Table 4 by removing features! The results here are somewhat mixed. In EMEA and Science, one can actually get by (according to AUC) with very few features: just two (Type:NgramProband Type:Context) are sufficient to achieve optimal AUC scores. To get higher Macro-F scores requires nearly all the features, though this is partially due to the choice of 1441 EMEA AUC MacF ALLFEATURES 79.60 71.64 –Token:L-PSDBin 77.09 70.50 –Type:RelFreq 78.43 72.19 –Token:G-PSD 79.66 72.11 –Type:Context 79.66 72.45 –Token:Ctx% 78.91 73.37 –Type:TopicSim 78.05 71.33 –Token:CtxCnt 76.90 71.72 –Token:L-PSD 76.03 73.35 –Type:NgramProb 73.32 69.54 –Token:G-PSDBin 74.41 69.76 –Token:NgramProb 69.78 68.89 –Token:PSDRatio 48.38 3.45 Science AUC MacF ALLFEATURES 73.91 47.54 –Token:L-PSDBin 76.26 53.69 –Token:G-PSD 77.04 53.56 –Token:G-PSDBin 77.44 54.54 –Token:L-PSD 77.85 56.05 –Token:PSDRatio 77.92 57.34 –Token:CtxCnt 77.85 54.42 –Type:Context 78.17 55.45 –Token:Ctx% 78.06 55.04 –Type:TopicSim 77.83 54.57 –Token:NgramProb 76.98 51.02 –Type:RelFreq 74.25 49.57 –Type:NgramProb 50.00 0.00 Subs AUC MacF ALLFEATURES 69.26 52.78 –Type:NgramProb 69.13 53.33 –Token:G-PSDBin 70.23 54.72 –Token:CtxCnt 71.23 58.35 –Token:L-PSDBin 72.07 57.85 –Token:G-PSD 72.17 57.33 –Type:TopicSim 72.31 58.41 –Token:Ctx% 72.17 56.17 –Token:NgramProb 71.35 59.26 –Token:PSDRatio 70.33 46.88 –Token:L-PSD 69.05 53.31 –Type:RelFreq 65.25 48.22 –Type:Context 50.00 0.00 Table 5: Feature ablation results for all three corpora. Selection criteria is AUC, but Macro-F is presented for completeness. Feature selection is run independently on each of the three datasets. The features toward the bottom were the first selected. 
AUC Macro-F Micro-F EMEA TYPEONLY 71.43 ± 0.94 52.62 ± 3.41 38.67 ± 1.35 TOKENONLY 73.75 ± 1.11 67.77 ± 4.18 45.49 ± 3.96 ALLFEATURES 72.19 ± 4.07 67.26 ± 7.88 49.29 ± 3.55 XV-ALLFEATURES 79.60 ± 1.20 71.64 ± 1.83 46.83 ± 0.62 Science TYPEONLY 75.19 ± 0.89 51.53 ± 2.55 37.14 ± 4.41 TOKENONLY 71.24 ± 1.45 47.27 ± 1.11 40.48 ± 1.84 ALLFEATURES 74.14 ± 0.93 48.86 ± 3.94 43.20 ± 3.16 XV-ALLFEATURES 73.91 ± 0.66 47.54 ± 1.52 40.22 ± 1.03 Subs TYPEONLY 60.90 ± 1.47 39.21 ± 14.78 24.77 ± 2.78 TOKENONLY 62.00 ± 1.16 49.74 ± 6.30 42.95 ± 3.92 ALLFEATURES 60.12 ± 2.11 50.16 ± 8.63 38.56 ± 5.20 XV-ALLFEATURES 69.26 ± 0.60 52.78 ± 1.96 43.85 ± 0.90 Table 6: Cross-domain test results on the SENSESPOTTING task. Two standard deviations are shown in small type. Only AUC, Macro-F and Micro-F are shown for brevity. AUC as the measure on which to ablate. It’s quite clear that for Science, all the useful information is in the type-level features, a result that echoes what we saw in the previous section. While for EMEA and Subs, both type- and token-level features play a significant role. Considering the six most useful features in each domain, the ones that pop out as frequently most useful are the global PSD features, the ngram probability features (either type- or token-based), the relative frequency features and the context features. 6.5 Cross-Domain Training One disadvantage to the previous method for evaluating the SENSESPOTTING task is that it requires parallel data in a new domain. Suppose we have no parallel data in the new domain at all, yet still want to attack the SENSESPOTTING task. One option is to train a system on domains for which we do have parallel data, and then apply it in a new domain. This is precisely the setting we explore in this section. Now, instead of performing cross-validation in a single domain (for instance, Science), we take the union of all of the training data in the other domains (e.g., EMEA and Subs), train a classifier, and then apply it to Science. This classifier will almost certainly be worse than one trained on NEW (Science) but does not require any parallel data in that domain. (Hyperparameters are chosen by development data from the OLD union.) The results of this experiment are shown in Table 6. We include results for TOKENONLY, TYPEONLY and ALLFEATURES; all of these are trained in the cross-domain setting. To ease comparison to the results that do not suffer from domain shift, we also present “XV-ALLFEATURES”, which are results copied from Table 4 in which parallel data from NEW is used. Overall, there is a drop of about 7.3% absolute in AUC, moving from XV-ALLFEATURES to ALLFEATURES, including a small improvement in Science (likely because Science is markedly smaller than Subs, and “more difficult” than EMEA with many word types). 6.6 Detecting Most Frequent Sense Changes We define a second, related task: MOSTFREQSENSECHANGE. In this task, instead of predicting if a given word token has a sense which is brand new with respect to the old domain, we predict whether it is being used with a a sense which is not the one that was observed most frequently in the old domain. In our EMEA, Science, and Subtitles data, 68.2%, 48.3%, and 69.6% of word tokens’ predominant sense changes. 1442 6 12 25 50 100 .32 .40 .50 .63 Science Macro−F % of data 6 12 25 50 100 .40 .50 .63 .79 EMEA % of data 6 12 25 50 100 .40 .50 .63 Subs % of data TypeOracle Random AllFeatures Figure 1: Learning curves for the three domains. X-axis is percent of data used, Y-axis is Macro-F score. 
Both axes are in log scale to show the fast rate of growth. A horizontal bar corresponding to random predictions, and the TYPEORACLE results are shown for comparison. AUC Macro-F Micro-F EMEA RANDOM 50.54 ± 0.41 58.23 ± 0.34 49.69 ± 0.85 CONSTANT 50.00 ± 0.00 82.15 ± 0.00 74.43 ± 0.00 TYPEONLY 55.05 ± 1.00 67.45 ± 1.35 65.72 ± 0.59 TYPEORACLE 88.36 ± 0.00 90.64 ± 0.00 77.46 ± 0.00 TOKENONLY 66.42 ± 1.07 80.27 ± 0.50 68.96 ± 0.58 ALLFEATURES 58.64 ± 3.45 80.57 ± 0.45 69.40 ± 0.51 Science RANDOM 50.13 ± 0.78 49.05 ± 0.82 48.19 ± 1.47 CONSTANT 50.00 ± 0.00 65.21 ± 0.00 73.22 ± 0.00 TYPEONLY 68.32 ± 1.05 54.70 ± 2.35 57.04 ± 1.52 TYPEORACLE 91.41 ± 0.00 86.71 ± 0.00 74.26 ± 0.00 TOKENONLY 68.49 ± 0.59 62.76 ± 0.89 64.40 ± 1.08 ALLFEATURES 68.31 ± 0.93 64.73 ± 1.93 67.20 ± 1.65 Subs RANDOM 50.27 ± 0.27 56.93 ± 0.29 50.93 ± 1.11 CONSTANT 50.00 ± 0.00 79.96 ± 0.00 76.26 ± 0.00 TYPEONLY 60.36 ± 0.90 67.78 ± 1.98 61.58 ± 1.78 TYPEORACLE 82.16 ± 0.00 87.96 ± 0.00 73.87 ± 0.00 TOKENONLY 59.49 ± 1.04 77.79 ± 0.82 73.51 ± 0.68 ALLFEATURES 54.97 ± 0.89 77.30 ± 1.58 72.29 ± 1.68 Table 7: Cross-validation results on the MOSTFREQSENSECHANGE task. Two standard deviations are shown in small type. We use the same set of features and learning framework to generate and evaluate models for this task. While the SENSESPOTTING task has MT utility in suggesting which new domain words demand a new translation, the MOSTFREQSENSECHANGE task has utility in suggesting which words demand a new translation probability distribution when shifting to a new domain. Table 7 shows the results of our MOSTFREQSENSECHANGE task experiments. Results on the MOSTFREQSENSECHANGE task are somewhat similar to those for the SENSESPOTTING task. Again, our models perform better under a macro-level evaluation than under a micro-level evaluation. However, in contrast to the SENSESPOTTING results, token-level features perform quite well on their own for all domains. It makes sense that our token level features have a better chance of success on this task. The important comparison now is between a new domain token in context and the majority of the old domain tokens of the same word type. This comparison is likely to be more informative than when we are equally interested in identifying overlap between the current token and any old domain senses. Like the SENSESPOTTING results, when doing a microlevel evaluation, our models do not perform as well as the CONSTANT baseline, and, as before, we attribute this to data sparsity. 6.7 Learning Curves All of the results presented so far use classifiers trained on instances of representative types (i.e. “representative tokens”) extracted from fairly large new domain parallel corpora (see Table 3), consisting of between 22 and 36 thousand parallel sentences, which yield between 8 and 35 thousand representative tokens. Although we expect some new domain parallel tuning data to be available in most MT settings, we would like to know how many representative types are required to achieve good performance on the SENSESPOTTING task. Figure 6.5 shows learning curves over the number of representative tokens that are used to train SENSESPOTTING classifiers. In fact, only about 25-50% of the data we used is really necessary to achieve the performance observed before. 
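A minimal sketch of how such learning curves could be produced, assuming a train_and_eval placeholder that trains on a subset of the representative tokens and returns held-out Macro-F; the random subsampling scheme is our assumption.

```python
# Train on growing fractions of the representative tokens and record the
# resulting Macro-F, yielding one learning-curve point per fraction.
import random


def learning_curve(representative_tokens, train_and_eval,
                   fractions=(0.06, 0.12, 0.25, 0.50, 1.0), seed=0):
    rng = random.Random(seed)
    shuffled = list(representative_tokens)
    rng.shuffle(shuffled)
    curve = []
    for frac in fractions:
        n = max(1, int(frac * len(shuffled)))
        curve.append((frac, train_and_eval(shuffled[:n])))
    return curve
```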
Acknowledgments We gratefully acknowledge the support of the JHU summer workshop program (and its funders), the entire DAMT team (http://hal3.name/DAMT/), Sanjeev Khudanpur, support from the NRC for Marine Carpuat, as well as DARPA CSSG Grant D11AP00279 for Hal Daumé III and Jagadeesh Jagarlamudi.
2013
141
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1446–1455, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics BRAINSUP: Brainstorming Support for Creative Sentence Generation G¨ozde ¨Ozbal FBK-irst Trento, Italy [email protected] Daniele Pighin Google Inc. Z¨urich, Switzerland [email protected] Carlo Strapparava FBK-irst Trento, Italy [email protected] Abstract We present BRAINSUP, an extensible framework for the generation of creative sentences in which users are able to force several words to appear in the sentences and to control the generation process across several semantic dimensions, namely emotions, colors, domain relatedness and phonetic properties. We evaluate its performance on a creative sentence generation task, showing its capability of generating well-formed, catchy and effective sentences that have all the good qualities of slogans produced by human copywriters. 1 Introduction A variety of real-world scenarios involve talented and knowledgable people in a time-consuming process to write creative, original sentences generated according to well-defined requisites. For instance, to advertise a new product it could be desirable to have its name appearing in a punchy sentence together with some keywords relevant for marketing, e.g. “fresh”, or “thirst” for the advertisement of a drink. Besides, it could be interesting to characterize the sentence with respect to a specific color, like “blue” to convey the idea of freshness, or to a color more related to the brand of the company, e.g. “red” for a new Ferrari. Moreover, making the slogan evoke “joy” or “satisfaction” could make the advertisement even more catchy for customers. On the other hand, there are many examples of provocative slogans in which copywriters try to impress their readers by suscitating strong negative feelings, as in the case of antismoke campaigns (e.g., “there are cooler ways to die than smoking” or “cancer cures smoking”), or the famous beer motto “Guinness is not good for you”. As another scenario, creative sentence generation is also a useful teaching device. For example, the keyword or linkword method used for second language learning links the translation of a foreign (target) word to one or more keywords in the native language which are phonologically or lexically similar to the target word (Sagarra and Alba, 2006). To illustrate, for teaching the Italian word “tenda”, which means “curtain” in English, the learners are asked to imagine “rubbing a tender part of their leg with a curtain”. These words should co-occur in the same sentence, but constructing such sentences by hand can be a difficult and very time-consuming process. ¨Ozbal and Strapparava (2011), who attempted to automate the process, conclude that the inability to retrieve from the web a good sentence for all cases is a major bottleneck. Although state of the art computational models of creativity often produce remarkable results, e.g., Manurung et al. (2008), Greene et al. (2010), Guerini et al. (2011), Colton et al. (2012) just to name a few, to our best knowledge there is no attempt to develop an unified framework for the generation of creative sentences in which users can control all the variables involved in the creative process to achieve the desired effect. 
In this paper, we advocate the use of syntactic information to generate creative utterances by describing a methodology that accounts for lexical and phonetic constraints and multiple semantic dimensions at the same time. We present BRAINSUP, an extensible framework for creative sentence generation in which users can control all the parameters of the creative process, thus generating sentences that can be used for practical applications. First, users can define a set of keywords which must appear in the final sentence. Second, they can slant the output towards a spe1446 Domain Keywords BRAINSUP output examples coffee waking, cup Between waking and doing there is a wondrous cup. coke drink, exhaustion The physical exhaustion wants the dark drink. health day, juice, sunshine With juice and cereal the normal day becomes a summer sunshine. beauty kiss, lips Passionate kiss, perfect lips. – Lips and eyes want the kiss. mascara drama, lash Lash your drama to the stage. – A mighty drama, a biting lash. pickle crunch, bite Crunch your bite to the top. – Crunch of a savage byte. – A large byte may crunch a little attention. soap skin, love, touch A touch of love is worth a fortune of skin. – The touch of froth is the skin of love. – A skin of water is worth a touch of love. Table 1: A selection of sentences automatically generated by BRAINSUP for specific domains. cific emotion, color or domain. At the same time, they can require a sentence to include desired phonetic properties, such as rhymes, alliteration or plosives. The combination of these features allows for the generation of potentially catchy and memorable sentences by establishing connections between linguistic, emotional (LaBar and Cabeza, 2006), echoic and visual (Borman et al., 2005) memory, as exemplified by the system outputs showcased in Table 1. Other creative dimensions can easily be plugged in, due to the inherently modular structure of the system. BRAINSUP supports the creative process by greedily exploring a huge solution space to produce completely novel utterances responding to user requisites. It exploits syntactic constraints to dramatically cut the size of the search space, thus making it possible to focus on the creative aspects of sentence generation. 2 Related work Research in creative language generation has bloomed in recent years. In this section, we provide a necessarily succint overview of a selection of the studies that most heavily inspired and influenced the development of BRAINSUP. Humor generators are a notable class of systems exploring new venues in computational creativity (Binsted and Ritchie, 1997; McKay, 2002; Manurung et al., 2008). Valitutti et al. (2009) present an interactive system which generates humorous puns obtained through variation of familiar expressions with word substitution. The variation takes place considering the phonetic distance and semantic constraints such as semantic similarity, semantic domain opposition and affective polarity difference. Possibly closer to slogan generation, Guerini et al. (2011) slant existing textual expressions to obtain more positively or negatively valenced versions using WordNet (Miller, 1995) semantic relations and SentiWordNet (Esuli and Sebastiani, 2006) annotations. Stock and Strapparava (2006) generate acronyms based on lexical substitution via semantic field opposition, rhyme, rythm and semantic relations. The model is limited to the generation of noun phrases. 
Poetry generation systems face similar challenges to BRAINSUP as they struggle to combine semantic, lexical and phonetic features in a unified framework. Greene et al. (2010) describe a model for poetry generation in which users can control meter and rhyme scheme. Generation is modeled as a cascade of weighted Finite State Transducers that only accept strings conforming to the desired rhyming scheme. Toivanen et al. (2012) attempt to generate novel poems by replacing words in existing poetry with morphologically compatible words that are semantically related to a target domain. Content control and the inclusion of phonetic features are left as future work and syntactic information is not taken into account. The Electronic Text Composition project1 is a corpus based approach to poetry generation which recursively combines automatically generated linguistic constituents into grammatical sentences. Colton et al. (2012) propose another data-driven approach to poetry generation based on simile transformation. The mood and theme of the poems are influenced by daily news. Constraints about phonetic properties of the selected words or their frequencies can be enforced during retrieval. Unlike these examples, BRAINSUP makes heavy use of syntactic information to enforce well-formed sentences and to constraint the search for a solution, and provides an extensible framework in which various forms of linguistic creativity can easily be incorporated. Several slogan generators are available on the web2, but their capabilities are very limited as they can only replace single words or word sequences within existing slogan. This often results in syntactically incorrect outputs. Furthermore, they do not allow for other forms of user control. 1http://slought.org/content/11199 2E.g.: http://www.procato.com/slogan+ generator, http://www.sloganizer.net/en/, http://www.sloganmania.com/index.htm. 1447 3 Architecture of BRAINSUP To effectively support the creative process with useful suggestions, we must be able to generate sentences conforming to the user needs. First of all, users can select the target words that need to appear in the sentence. In the context of second language learning, these might be the words that a learner must associate in order to expand her vocabulary. For slogan generation, the target words could be the key features of a product, or targetdefining keywords that copywriters want to explicitly mention. On top of that, a user can characterize the generated sentences according to several dimensions, namely: 1) a specific semantic domain, e.g.: “sports” or “blankets”; 2) a specific emotion, e.g., “joy”, “anger” or just “negative”; 3) a specific color, e.g., “red” or “blue”; 4) a combination of phonetic properties of the words that will appear in the sentence, i.e., rhymes, alliterations and plosives. More formally, the user input is a tuple: U = ⟨t, d, c, e, p, w⟩, where t is the set of target words, d is a set of words defining the target domain, c and p are, respectively, the color and the emotion towards which the user wants to slant the sentence, p represents the desired phonetic features, and w is a set of weights that control the influence of each dimension on the generative process, as detailed in Section 3.3. For target and domain words, users can explicitly select one or more POSes to be considered, e.g., “drink/verb” or “drink/verb,noun”. 
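As an illustration only (class and field names are ours, not taken from the released system), the user specification U could be represented as a small data structure:

```python
# Illustrative rendering of the user specification U = <t, d, c, e, p, w>.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class UserSpec:
    target_words: List[str]                 # t, e.g. ["drink/verb", "exhaustion"]
    domain_words: List[str]                 # d, words defining the target domain
    color: Optional[str] = None             # c, e.g. "red"
    emotion: Optional[str] = None           # e, e.g. "joy" or just "positive"
    phonetics: List[str] = field(default_factory=list)       # p: "rhyme", "alliteration", "plosives"
    weights: Dict[str, float] = field(default_factory=dict)  # w, one weight per feature function
```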
The sentence generation process is based on morpho-syntactic patterns which we automatically discover from a corpus of dependency parsed sentences P. These patterns represent very general skeletons of well-formed sentences that we employ to generate creative sentences by only focusing on the lexical aspects of the process. Candidate fillers for each empty position (slot) in the patterns are chosen according to the lexical and syntactic constraints enforced by the dependency relations in the patterns. These constraints are learned from relation-head-modifier co-occurrence counts estimated from a dependency treebank L. A beam search in the space of all possible lexicalizations of a syntactic pattern promotes the words with the highest likelihood of satisfying the user specification. Algorithm 1 provides a high-level description of the creative sentence generation process. Here, Θ is a set of meta-parameters that affect search complexity and running time of the algorithm, such as the minimum/maximum number of solutions to Algorithm 1 SentenceGeneration(U, Θ, P, L): U is the user specification, Θ is a set of meta-parameters; P and L are two dependency treebanks. O ←∅ for all p ∈CompatiblePatternsΘ(U, P) do while NotEnoughSolutionsΘ(O) do O ←O ∪FillInPatternΘ(U, p, L) return SelectBestSolutionsΘ(O) DT NNS VBD DT JJ NN IN DT NN The * * a * * in the * det nsubj dobj det amod prep pobj det Figure 1: Example of a syntactic pattern. A “*” represents an empty slot to be filled with a filler. be generated, the maximum number of patterns to consider, or the maximum size of the generated sentences. CompatiblePatterns(·) finds the most frequent syntactic patterns in P that are compatible with the user specification, as explained in Section 3.1; FillInPattern(·) carries out the beam search, and returns the best solutions generated for each pattern p given U. The algorithm terminates when at least a minimum number of solutions have been generated, or when all the compatible patterns have been exhausted. Finally, only the best among the generated solutions are shown to the user. More details about the search in the solution space are provided in Section 3.2. 3.1 Pattern selection We generate creative sentences starting from morpho-syntactic patterns which have been automatically learned from a large corpus P. The choice of the corpus from which the patterns are extracted constitutes the first element of the creative sentence generation process, as different choices will generate sentences with different styles. For example, a corpus of slogans or punchlines can result in short, catchy and memorable sentences, whereas a corpus of simplified English would be a better choice to learn a second language or to address low reading level audiences. A pattern is the syntactic skeleton of a class of sentences observed in P. Within a pattern, a second element of creativity involves the selection of original combinations of words (fillers) that do not violate the grammaticality of the sentence. The patterns that we employ are automatic dependency trees from which all content-words have been removed, as exemplified in Figure 1. After selecting the target corpus, we parse all the sentences with the Stanford Parser (Klein and Man1448 ning, 2003) and produce the patterns by stripping away all content words from the parses. Then, for each pattern we count how many times it has been observed in the corpus. Additionally, we keep track of what kind of empty slots, i.e., empty positions, are available in each pattern. 
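A minimal sketch of this pattern-extraction step, assuming parses are available as simple (word, POS, head index, relation) tuples; treating nouns, verbs, adjectives and adverbs as the content words is an approximation.

```python
# Strip content words from each dependency parse, keep stop words, POS tags
# and relations, and count how often each resulting skeleton occurs,
# together with the POS inventory of its empty slots.
from collections import Counter

CONTENT_POS = ("NN", "VB", "JJ", "RB")  # nouns, verbs, adjectives, adverbs


def to_pattern(parse):
    """parse: list of (word, pos, head_index, relation) tuples."""
    return tuple(("*" if pos.startswith(CONTENT_POS) else word.lower(), pos, head, rel)
                 for word, pos, head, rel in parse)


def collect_patterns(parses):
    counts, empty_slots = Counter(), {}
    for parse in parses:
        pattern = to_pattern(parse)
        counts[pattern] += 1
        empty_slots[pattern] = Counter(pos for tok, pos, _, _ in pattern if tok == "*")
    return counts, empty_slots
```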
For example, the pattern in Figure 1 can accommodate up to two singular nouns (NN), one plural noun (NNS), one adjective (JJ) and one verb in the past tense (VBD). This information is needed to select the patterns which are compatible with the target words t in the user specification U. For example, this pattern is not compatible with t = [heading/VBG, edge/NN] as the pattern does not have an empty slot for a gerundive verb, while it satisfies t = [heading/NN, edge/NN] as it can accommodate the two singular nouns. While retrieving patterns, we also need to enforce that a pattern be not completely filled just by adding the target words t, as under these conditions there would be no room to achieve any kind of creative effect. Therefore, we also require that the patterns retrieved by CompatiblePatterns(·) have more empty slots than the size of t. The minimum and maximum number of excess slots in the pattern are two other meta-parameters controlled by Θ. CompatiblePatterns(·) returns compatible patterns ordered by their frequency, i.e. when generating solutions the first patterns that are explored are the most frequently observed ones. In this way, we achieve the following two objectives: 1) we compensate for the unavoidable errors introduced by the automatic parser, as frequently observed parses are less likely to be the result of an erroneous interpretation of a sentence; and 2) we generate sentences that are most likely to be catchy and memorable, being based on syntactic constructs that are used more frequently. To avoid always selecting the same patterns for the same kinds of inputs, we add a small random component (also controlled by Θ) to the pattern sorting algorithm, thus allowing for sentences to be generated also from non-top ranked patterns. 3.2 Searching the solution space With the compatible patterns selected, we can initiate a beam search in the space of all possible lexicalizations of the patterns, i.e., the space of all sentences that can be generated by respecting the syntactic constraints encoded by each pattern. The process starts with a syntactic pattern p containing only stop words, syntactic relations and morphologic constraints (i.e., part-of-speech DT NNS VBD DT JJ NN IN DT NN The fires X a * smoke in the * det nsubj dobj det amod prep pobj det Figure 2: A partially lexicalized sentence with a highlighted empty slot marked with X. The relevant dependencies to fill in the slot are shown in boldface. tags) for the empty slots. The search advances towards a complete solution by selecting an empty slot to fill and trying to place candidate fillers in the selected position. Each partially lexicalized solution is scored by a battery of scoring functions that compete to generate creative sentences respecting the user specification U, as explained in Section 3.3. The most promising solutions are extended by filling another slot, until completely lexicalized sentences, i.e., sentences without empty slots, are generated. To limit the number of words that can occupy a given position in a sentence, we define a set of operators that return a list of candidate fillers for a slot solely based on syntactic clues. To achieve that, we analyze a large corpus of parsed sentences L3 and store counts of observed head-relationmodifier (⟨h, r, m⟩) dependency relations. Let τr(h) be an operator that, when applied to a head word h in a relation r, returns the set of words in L which have been observed as modifiers for h in r with a specific POS. 
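One simple way to back such operators is with co-occurrence indices over the observed ⟨head, relation, modifier⟩ triples; the in-memory dictionaries below are an illustrative choice, not a description of the actual implementation.

```python
# Dependency operators tau_r and their inverses, backed by indices over
# <head, relation, modifier> triples extracted from the parsed corpus L.
from collections import defaultdict


class DependencyOperators:
    def __init__(self):
        self.forward = defaultdict(set)   # (relation, head, modifier_pos) -> modifier words
        self.inverse = defaultdict(set)   # (relation, modifier, head_pos) -> head words

    def add_triple(self, head, head_pos, relation, modifier, modifier_pos):
        self.forward[(relation, head, modifier_pos)].add(modifier)
        self.inverse[(relation, modifier, head_pos)].add(head)

    def tau(self, relation, head, modifier_pos):
        """Words observed as modifiers of `head` under `relation`, with the given POS."""
        return self.forward[(relation, head, modifier_pos)]

    def tau_inv(self, relation, modifier, head_pos):
        """Words observed as heads of `modifier` under `relation`, with the given POS."""
        return self.inverse[(relation, modifier, head_pos)]
```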
To simplify the notation, we assume that the relation r also carries along the POS of the head and modifier slots. As an example, with respect to the tree depicted in Figure 2, τamod(smoke) would return all the words with POS equal to “JJ” that have been observed as adjective modifiers for the singular noun “smoke”. We will refer to τr(·) as the dependency operator for r. For every τr(·), we also define an inverse dependency operator τ −1 r (·), which returns the list of the possible heads in r when applied to a modifier word m. For instance, with respect to Figure 2, τ −1 nsubj(fires) would return the set of verbs in the past tense of which “fires” as a plural noun can be a subject. While filling in a given slot X, the dependency operators can be combined to obtain a list of words which are likely to occupy that position given the syntactic constraints induced by the structure of the pattern. Let W = ∪i{wi} be the set of words which are directly connected to the empty slot by 3Distinct from the corpus used for pattern selection, P. 1449 a dependency relation. Each word wi implies a constraint that candidate fillers for X must satisfy. If wi is the head of X, then a direct operator is used to retrieve a list of fillers that satisfy the ith constraint. Conversely, if wi is a modifier of X, an inverse operator is employed. As an example, let us consider the partially completed sentence shown in Figure 2 having an empty slot marked with X. Here, the word “smoke” is a modifier for X, to which it is connected by a dobj relation. Therefore, we can exploit τ −1 dobj(smoke) to obtain a ranked list of words that can occupy X according to this constraint. Similarly, the τ −1 nsubj(fires) operator can be used to retrieve a list of verbs in the past tense that accept “fires” as nsubj modifier. Finally τ −1 prep(in) can further restrict our options to verbs that accepts complements introduced by the preposition “in”. For example, the words “generated”, “produced”, “caused” or “formed” would be good candidates to fill in the slot considering all the previous constraints. More formally, we can define the set of candidate fillers for a slot X, CX, as: CX = τ −1 rhX,X(hX) ∩(T wi|wi∈MX τrwi,X(wi)), where rwi,X is the type of relation between wi and X, MX is the set of modifiers of X and hX is the syntactic head of X.4 Concerning the order in which slots are filled, we start from those that have the highest number of dependencies (both head or modifiers) that have been already instantiated in the sentence, i.e., we start from the slots that are connected to the highest number of non-empty slots. In doing so we maximize the constraints that we can rely on when inserting a new word, and eventually generate more reliable outputs. 3.3 Filler selection and solution scoring We have devised a set of feature functions that account for different aspects of the creative sentence generation process. By changing the weight w of the feature functions in U, users can control the extent to which each creativity component will affect the sentence generation process, and tune the output of the system to better match their needs. As explained in the remainder of this section, feature functions are responsible for ranking the candidate slot fillers to be used during sentence generation and for selecting the best solutions to be 4An empty slot does not generate constraints for X. In addition, there might be cases in which it is not possible to find a filler that satisfies all the constraints at the same time. 
In such cases, all the fillers that satisfy the maximum number of constraints are considered. Algorithm 2 RankCandidates(U, f, c1, c2, s, X): c1 and c2 are two candidate fillers for the slot X in the sentence s = [s0, . . . sn]; f is the set of feature functions; U is the user specification. sc1 ←s, sc2 ←s, sc1[X] ←c1, sc2[X] ←c2 for all f ∈SortFeatureFunctionsΘ(U, f) do if f(sc1, U) > f(sc2, U) then return c1 ≻c2 else if f(sc1, U) < f(sc2, U) then return c1 ≺c2 return c1 ≡c2 shown to the users. Algorithm 2 details the process of ranking candidate fillers. To compare two candidates c1 and c2 for the slot X in the sentence s, we first generate two sentences sc1 and sc2 in which the empty slot X is occupied by c1 and c2, respectively. Then, we sort the feature functions based on their weights in descending order, and in turn we apply them to score the two sentences. As soon as we find a scorer for which one sentence is better than the other, we can take a decision about the ranking of the fillers. This approach makes it possible to establish a strict order of precedence among feature functions and to select fillers that have a highest chance of maximizing the user satisfaction. Concerning the scoring of partial solutions and complete sentences, we adopt a simple linear combination of scoring functions. Let s be a (partial) sentence, f = [f0, . . . , fk] be the vector of scoring functions and w = [w0, . . . , wk] the associated vector of weights in U. The overall score of s is calculated as score(s, U) = Pk i=0 wifi(s, U) . Solutions that do not contain all the required target words are discarded and not shown to the user. Currently, the model employs the following 12 feature functions: Chromatic and emotional connotation. The chromatic connotation of a sentence s = [s0, . . . , sn] is computed as f(s, U) = P si(sim(si, c) −P cj̸=c sim(si, cj)), where c is the user selected target color and sim(si, cj) is the degree of association between the word si and the color cj as calculated by Mohammad (2011). All the words in the sentence which have an association with the target color c give a positive contribution, while those that are associated with a color ci ̸= c contribute negatively. Emotional connotation works exactly in the same way, but in this case word-emotion associations are taken from (Mohammad and Turney, 2010). Domain relatedness. This feature function uses an LSA (Deerwester et al., 1990) vector space 1450 model to measure the similarity between the words in the sentence and the target domain d specified by the user. It is calculated as: f(s, U) = P di v(di)·P si v(si) ∥P di v(di)∥·∥P si v(si)∥where v(·) returns the representation of a word in the vector space. Semantic cohesion. This feature behaves exactly like domain relatedness, with the only difference that it measures the similarity between the words in the sentence and the target words t. Target-words scorer. This feature function simply counts what fraction of the target words t is present in a partial solution: f(s, U) = (P si|si∈t 1)/|t|. The target word scorer takes care of enforcing the presence of the target words in the sentences. Letting beam search find the best placement for the target words comes at no extra cost and results in a simple and elegant model. Phonetic features (plosives, alliteration and rhyme). All the phonetic features are based on the phonetic representation of English words of the Carnegie Mellon University pronouncing dictionary (Lenzo, 1998). 
The plosives feature is calculated as the ratio between the number of plosive sounds in a sentence and the overall number of phonemes. For the alliteration scorer, we store the phonetic representation of each word in s in a trie (i.e., prefix tree), and count how many times each node ni of the trie (corresponding to a phoneme) is traversed. Let ci be the value of the counts for ni. The alliteration score is then calculated as f(s, U) = (P i|ci>1 ci)/ P i ci. More simply put, we count how many of the phonetic prefixes of the words in the sentence are repeated, and then we normalize this value by the total number of phonemes in s. The rhyme feature works exactly in the same way, with the only difference that we invert the phonetic representation of each word before adding it to the TRIE. Thus, we give higher scores to sentences in which several words share the same phonetic ending. Variety scorer. This feature function promotes sentences that contain as many different words as possible. It is calculated as the number of distinct words in the sentence over the size of the sentence. Unusual-words scorer. To increase the ability of the model to generate sentences containing nontrivial word associations, we may want to prefer solutions in which relatively uncommon words are employed. Inversely, we may want to lower lexical complexity to generate sentences more appropriate for certain education or reading levels. We define ci as the number of times each word si ∈s is observed in a corpus V. Accordingly, the value of this feature is calculated as: f(s, U) = (1/|s|)(P si 1/ci). N-gram likelihood. This is simply the likelihood of a sentence estimated by an n-gram language model, to enforce the generation of wellformed word sequences. When a solution is not complete, in the computation we include only the sequences of contiguous words (i.e., not interrupted by empty slots) having length greater than or equal to the order of the n-gram model. Dependency likelihood. This feature is related to the dependency operators introduced in Section 3.2 and it enforces sentences in which dependency chains are well formed. We estimate the probability of a modifier word m and its head h to be in the relation r as pr(h, m) = cr(h, m)/(P hi P mi cr(hi, mi)), where cr(·) is the number of times that m depends on h in the dependency treebank L and hi, mi are all the head/modifier pairs observed in L. The dependency-likelihood of a sentence s can then be calculated as f(s, U) = exp(P ⟨h,m,r⟩∈r(s) log pr(h, m)), r(s) being the set of dependency relations in s. 4 Evaluation We evaluated our model on a creative sentence generation task. The objective of the evaluation is twofold: we wanted to demonstrate 1) the effectiveness of our approach for creative sentence generation, in general, and 2) the potential of BRAINSUP to support the brainstorming process behind slogan generation. To this end, the annotation template included one question asking the annotators to rate the quality of the generated sentences as slogans. Five experienced annotators were asked to rate 432 creative sentences according to the following criteria, namely: 1) Catchiness: is the sentence attractive, catchy or memorable? [Yes/No] 2) Humor: is the sentence witty or humorous? [Yes/No]; 3) Relatedness: is the sentence semantically related to the target domain? [Yes/No]; 4) Correctness: is the sentence grammatically correct? [Ungrammatical/Slightly disfluent/Fluent]; 5) Success: could the sentence be a good slogan for the target domain? 
[As it is/With minor editing/No]. In these last two cases, the annotators 1451 were instructed to select the middle option only in cases where the gap with a correct/successful sentence could be filled just by performing minor editing. The annotation form had no default values, and the annotators did not know how the evaluated sentences were generated, or whether they were the outcome of one or more systems. We started by collecting slogans from an online repository of slogans5. Then, we randomly selected a subset of these slogans and for each of them we generated an input specification U for the system. We used the commercial domain of the advertised product as the target domain d. Two or three content words appearing in each slogan were randomly selected as the target words t. We did so to simulate the brainstorming phase behind the slogan generation process, where copywriters start with a set of relevant keywords to come up with a catchy slogan. In all cases, we set the target emotion to “positive” as we could not establish a generally valid criteria to associate a specific emotion to a product. Concerning chromatic slanting, for target domains having a strong chromatic correlation we allowed the system to slant the generated sentences accordingly. In the other cases, a random color association was selected. In this manner, we produced 10 tuples ⟨t, d, c, e, p⟩. Then, from each tuple we produced 5 complete user specifications by enabling or disabling different feature function combinations6. The four combinations of features are: base: Target-word scorer + N-gram likelihood + Dependency likelihood + Variety scorer + Unusual-words scorer + Semantic cohesion; base+D: all the scorers in base + Domain relatedness; base+D+C: all the scorers in base+D + Chromatic connotation; base+D+E: all the scorers in base+D + Emotional connotation; base+D+P: all the scorers in base+D + Phonetic features. For each of the resulting 50 input configurations, we generated up to 10 creative sentences. As the system could not generate exactly 10 solutions in all the cases, we ended up with a set of 432 items to annotate. The weights of the feature functions were set heuristically, due to the lack of an annotated dataset suitable to learn an opti5http://www.tvacres.com/advertising_ slogans.htm 6An alternative strategy to keep the annotation effort under control would have been to generate fewer sentences from a larger number of inputs. We adopted the former setting since we regarded it as more similar to a brainstorming session, where the system proposes different alternatives to inspire human operators. Forcing BRAINSUP to only output one or two sentences would have limited its ability to explore and suggest potentially valuable outputs. MC Cat. Hum. Corr. Rel. Succ. RND2 RND3 2 16.67 22.22 37.04 3 47.45 39.58 43.52 13.66 44.21 62.50 49.38 4 33.10 37.73 32.18 21.99 22.22 31.25 12.35 5 19.44 22.69 07.64 64.35 11.34 06.25 01.23 Table 2: Majority classes (%) for the five dimensions of the annotation. mal weight configuration. We started by assigning the highest weight to the Target Word scorer (i.e., 1.0), followed by the Variety and Unusual Word scorers (0.99), the Phonetic Features, Chromatic/Emotional Connotation and Semantic Cohesion scorers (0.98) and finally the Domain, Ngram and Dependency Likelihood scorers (0.97). These settings allow us to enforce an order of precedence among the scorers during slot-filling, while giving them virtually equal relevance for solution ranking. 
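A sketch of how these weights could drive both filler ranking (the lexicographic comparison of Algorithm 2) and the weighted-sum scoring of complete solutions; the with_filler helper and the feature_fns mapping from feature name to scoring function are hypothetical.

```python
# Feature functions sorted by weight break ties during slot filling, while
# the overall solution score is the weighted sum of all feature scores.
def better_filler(cand_a, cand_b, sentence, slot, feature_fns, weights, user_spec):
    sent_a = sentence.with_filler(slot, cand_a)   # hypothetical helper
    sent_b = sentence.with_filler(slot, cand_b)
    for name in sorted(feature_fns, key=lambda n: weights[n], reverse=True):
        score_a = feature_fns[name](sent_a, user_spec)
        score_b = feature_fns[name](sent_b, user_spec)
        if score_a != score_b:
            return cand_a if score_a > score_b else cand_b
    return cand_a  # the two candidates are equivalent


def score_solution(sentence, feature_fns, weights, user_spec):
    return sum(weights[name] * fn(sentence, user_spec) for name, fn in feature_fns.items())
```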
As discussed in Section 3 we use two different treebanks to learn the syntactic patterns (P) and the dependency operators (L). For these experiments, patterns were learned from a corpus of 16,000 proverbs (Mihalcea and Strapparava, 2006), which offers a good selection of short sentences with a good potential to be used for slogan generation. This choice seemed to be a good compromise as, to our best knowledge, there is no published slogan dataset with an adequate size. Besides, using existing slogans might have legal implications that we might not be aware of. Dependency operators were learned by dependency parsing the British National Corpus7. To reduce the amount of noise introduced by the automatic parses, we only considered sentences having less than 20 words. Furthermore, we only considered sentences in which all the content words are listed in WordNet (Miller, 1995) with the observed part of speech.8 The LSA space used for the semantic feature functions was also learned on BNC data, but in this case no filtering was applied. 4.1 Results To measure the agreement among the annotators, similarly to Mohammad (2011) and Ozbal and Strapparava (2012) we calculated the majority class for each dimension of the annotation task. A 7http://www.natcorp.ox.ac.uk/ 8Since the CMU pronouncing dictionary used by the phonetic scorers is based on the American pronunciation of words, we actually pre-processed the whole BNC by replacing all British-English words with their American-English counterparts. To this end, we used the mapping available at http://wordlist.sourceforge.net/. 1452 Cat. Rel. Hum. Succ. Corr. Yes 67.59 93.98 12.73 32.41 64.35 Partly 23.15 31.71 No 32.41 06.02 87.27 44.44 03.94 Table 3: Majority decisions (%) for each annotation dimension. majority class greater than or equal to 3 means that the absolute majority of the 5 annotators agreed on the same decision9. Table 2 shows the observed agreement for each dimension. The column labeled RND2 (RND3) shows the random agreement for a given number of annotators and a binary (ternary) decision. For example, all five annotators (MC=5) agreed on the annotation of the catchiness of the slogans in 19.44% of the cases. The random chance of agreement for 5 annotators on the binary decision problem is 6.25%. The figures for MC ≥ 4 are generally high, confirming a good agreement among the annotators. The agreement on the relatedness of the slogans is especially high, with all 5 annotators taking the same decision in almost two cases out of three, i.e., 64.35%. Table 3 lists the distribution of answers for each dimension in the cases where a decision can be taken by majority vote. The generated slogans are found to be catchy in more than 2/3 of the cases, (i.e., 67.59%), completely successful in 1/3 of the cases (32.41%) and completely correct in 2/3 of the cases (64.35%). These figures demonstrate that BRAINSUP is very effective in generating grammatical utterances that have all the appealing properties of a successful slogan. As for humor, the sentences are found to have this property in only 12.73% of cases. Even though the figure is not very high, we should also consider that BRAINSUP is not explicitly trying to generate amusing utterances. Concerning success, we should point out that in 23.15% of the cases the annotators have found that the generated slogans have the potential to be turned into successful ones only with minor editing. 
This is a very important piece of result, as it corroborates our claim that BRAINSUP can indeed be a valuable tool for copywriting, even when it does not manage to output a perfectly good sentence. Similar conclusions can be drawn concerning the correctness of the output, as in almost one third of the cases the slogans are 9For the binary decisions (i.e., catchiness, relatedness and humor), at least 3 annotators out of 5 must necessarily agree on the same option. only affected by minor disfluencies. The relatedness figure is especially high, as in almost 94% of the cases the majority of annotators found the slogans to be pertinent to the target domain. This result is not surprising, as all the slogans are generated by considering keywords that already exist in real slogans for the same domain. Anyhow, this is exactly the kind of setting in which we expect BRAINSUP to be employed, i.e., to support creative sentence generation starting from a good set of relevant keywords. Nonetheless, it is very encouraging to observe that the generation process does not deteriorate the positive impact of the input keywords. We would also like to mention that in 63 cases (14.58%) the majority of the annotators have labeled the slogans favorably across all 5 dimensions. The examples listed in Table 1 are selected from this set. It is interesting to observe how the word associations established by BRAINSUP can result in pertinent yet unintentional rhetorical devices such as metaphors (“a summer sunshine”), puns (“lash your drama”) and personifications (“lips and eyes want”). Some examples show the effect of the phonetic features, e.g. plosives in “passionate kiss, perfect lips”, alliteration in “the dark drink” and rhyming in “lips and eyes want the kiss”. In some cases, the output of BRAINSUP seems to be governed by mysterious philosophical reasoning, as in the delicate examples generated for “soap”. For comparison, Table 4 lists a selection of the examples that have been labeled as unsuccessful by the majority of raters. In some cases, BRAINSUP is improperly selecting attributes that highlight undesirable properties in the target domain, e.g., “A pleasant tasting, a heady wine”. To avoid similar errors, it would be necessary to reason about the valence of an attribute for a specific domain. In other cases, the N-gram and the Dependency Likelihood features may introduce phrases which are very cohesive but unrelated to the rest of the sentence, e.g., “Unscrupulous doctors smoke armored units”. Many of these errors could be solved by increasing the weight of the Semantic Cohesion and Domain Relatedness scorers. In other cases, such as “A sixth calorie may taste an own good” or “A same sunshine is fewer than a juice of day”, more sophisticated reasoning about syntactic and semantic relations in the output might be necessary in order to enforce the generation of sound and grammatical sentences. We could not find a significant correlation be1453 Domain Keywords BRAINSUP output examples pleasure wine, tasting A pleasant tasting, a heady wine. – A fruity tasting may drink a sparkling wine. healthy day, juice, sunshine Drink juice of your sunshine, and your weight will choose day of you. – A same sunshine is fewer than a juice of day. cigarette doctors, smoke Unscrupulous doctors smoke armored units. – Doctors smoke no arrow. mascara drama, lash The such drama is the lash. soap skin, love, touch The touch of skin is the love of cacophony. – You love an own skin for a first touch. 
coke calorie, taste, good A sixth calorie may taste an own good. coffee waking, cup You cannot cup hands without waking some fats. Table 4: Unsuccessful BRAINSUP outputs. tween the input variables (e.g., presence or absence of phonetic features or chromatic slanting) and the outcome of the annotation, i.e. the system by and large produces correct, catchy, related and (at least potentially) successful outputs regardless of the specific input configurations. In this respect, it should be noted that we did not carry out any kind of optimization of the feature weights, which might be needed to obtain more heavily characterized sentences. Furthermore, to better appreciate the contribution of the individual features, comparative experiments in which the users evaluate the system before and after triggering a feature function might be necessary. Concerning the correlation among output dimensions, we only observed relatively high Spearman correlation between correctness and relatedness (0.65), and catchiness and success (0.68). 5 Conclusion We have presented BRAINSUP, a novel system for creative sentence generation that allows users to control many aspects of the creativity process, from the presence of specific target words in the output, to the selection of a target domain, and to the injection of phonetic and semantic properties in the generated sentences. BRAINSUP makes heavy use of dependency parsed data and statistics collected from dependency treebanks to ensure the grammaticality of the generated sentences, and to trim the search space while seeking the sentences that maximize the user satisfaction. The system has been designed as a supporting tool for a variety of real-world applications, from advertisement to entertainment and education, where at the very least it can be a valuable support for time-consuming and knowledgeintensive sentence generation needs. To demonstrate this point, we carried out an evaluation on a creative sentence generation benchmark showing that BRAINSUP can effectively produce catchy, memorable and successful sentences that have the potential to inspire the work of copywriters. To our best knowledge, this is the first systematic attempt to build an extensible framework that allows for multi-dimensional creativity while at the same time relying on syntactic constraints to enforce grammaticality. In this regard, our approach is dual with respect to previous work based on lexical substitution, which suffers from limited expressivity and creativity latitude. In addition, by acquiring the lexicon and the sentence structure from two distinct corpora, we can guarantee that the sentences that we generate have never been observed. We believe that our contribution constitutes a valid starting point for other researchers to deal with unexplored dimensions of creativity. As future work, we plan to use machine learning techniques to estimate optimal weights for the feature functions in different use cases. We would also like to consider syntactic clues while reasoning about semantic properties of the sentence, e.g., color and emotion associations, instead on relying solely on lexical semantics. Concerning the extension of the capabilities of BRAINSUP, we want to include common-sense knowledge and reasoning to profit from more sophisticated semantic relations and to inject humor on demand. Further tuning of BRAINSUP to build a dedicated system for slogan generation is also part of our future plans. 
After these improvements, we would like to conduct a more focused evaluation on slogan generation involving human copywriters and domain experts in an interactive setting. We would like to conclude this paper with a pearl of BRAINSUP’s wisdom: It is wiser to believe in science than in everlasting love. Acknowledgments G¨ozde ¨Ozbal and Carlo Strapparava were partially supported by the PerTe project (Trento RISE). 1454 References Kim Binsted and Graeme Ritchie. 1997. Computational rules for generating punning riddles. Humor International Journal of Humor Research, 10(1):25– 76, January. Andy Borman, Rada Mihalcea, and Paul Tarau. 2005. Pic-net: Pictorial representations for illustrated semantic networks. In Proceedings of the AAAI Spring Symposium on Knowledge Collection from Volunteer Contributors. Simon Colton, Jacob Goodwin, and Tony Veale. 2012. Full-FACE Poetry Generation. In Proceedings of the 3rd International Conference on Computational Creativity, pages 95–102. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal Of The American Society for Information Science, 41(6):391–407. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC’06), pages 417–422. Erica Greene, Tugba Bodrumlu, and Kevin Knight. 2010. Automatic analysis of rhythmic poetry with applications to generation and translation. In EMNLP, pages 524–533. Marco Guerini, Carlo Strapparava, and Oliviero Stock. 2011. Slanting existing text with valentino. In Proceedings of the 16th international conference on Intelligent user interfaces, IUI ’11, pages 439–440, New York, NY, USA. ACM. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 423– 430, Stroudsburg, PA, USA. Association for Computational Linguistics. Kevin S. LaBar and Roberto Cabeza. 2006. Cognitive neuroscience of emotional memory. Nature reviews. Neuroscience, 7(1):54–64, January. Kevin Lenzo. 1998. The cmu pronouncing dictionary. http://www.speech.cs.cmu.edu/cgi-bin/cmudict. Ruli Manurung, Graeme Ritchie, Helen Pain, Annalu Waller, Dave O’Mara, and Rolf Black. 2008. The Construction of a Pun Generator for Language Skills Development. Applied Artificial Intelligence, 22(9):841–869, October. J McKay. 2002. Generation of idiom-based witticisms to aid second language learning. In Twente Workshop on Language Technology 20, pages 70–74. R. Mihalcea and C. Strapparava. 2006. Learning to laugh (automatically): Computational models for humor recognition. Journal of Computational Intelligence, 22(2):126–142, May. George A. Miller. 1995. Wordnet: A lexical database for english. Communications of the ACM, 38:39–41. Saif M. Mohammad and Peter D. Turney. 2010. Emotions evoked by common words and phrases: using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, CAAGET ’10, pages 26– 34, Stroudsburg, PA, USA. Association for Computational Linguistics. Saif Mohammad. 2011. Even the abstract have color: Consensus in word-colour associations. 
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1456–1465, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Grammatical Error Correction Using Integer Linear Programming Yuanbin Wu Department of Computer Science National University of Singapore 13 Computing Drive Singapore 117417 [email protected] Hwee Tou Ng Department of Computer Science National University of Singapore 13 Computing Drive Singapore 117417 [email protected] Abstract We propose a joint inference algorithm for grammatical error correction. Different from most previous work where different error types are corrected independently, our proposed inference process considers all possible errors in a unied framework. We use integer linear programming (ILP) to model the inference process, which can easily incorporate both the power of existing error classiers and prior knowledge on grammatical error correction. Experimental results on the Helping Our Own shared task show that our method is competitive with state-of-the-art systems. 1 Introduction Grammatical error correction is an important task of natural language processing (NLP). It has many potential applications and may help millions of people who learn English as a second language (ESL). As a research eld, it faces the challenge of processing ungrammatical language, which is different from other NLP tasks. The task has received much attention in recent years, and was the focus of two shared tasks on grammatical error correction in 2011 and 2012 (Dale and Kilgarriff, 2011; Dale et al., 2012). To detect and correct grammatical errors, two different approaches are typically used — knowledge engineering or machine learning. The rst relies on handcrafting a set of rules. For example, the superlative adjective best is preceded by the article the. In contrast, the machine learning approach formulates the task as a classication problem based on learning from training data. For example, an article classier takes a noun phrase (NP) as input and predicts its article using class labels a/an, the, or ɛ (no article). Both approaches have their advantages and disadvantages. One can readily handcraft a set of rules to incorporate various prior knowledge from grammar books and dictionaries, but rules often have exceptions and it is difcult to build rules for all grammatical errors. On the other hand, the machine learning approach can learn from texts written by ESL learners where grammatical errors have been annotated. However, training data may be noisy and classiers may need prior knowledge to guide their predictions. Another consideration in grammatical error correction is how to deal with multiple errors in an input sentence. Most previous work deals with errors individually: different classiers (or rules) are developed for different types of errors (article classier, preposition classier, etc). Classiers are then deployed independently. An example is a pipeline system, where each classier takes the output of the previous classier as its input and proposes corrections of one error type. One problem of this pipeline approach is that the relations between errors are ignored. For example, assume that an input sentence contains a cats. An article classier may propose to delete a, while a noun number classier may propose to change cats to cat. A pipeline approach will choose one of the two corrections based purely on which error classier is applied rst. 
Another problem is that when applying a classier, the surrounding words in the context are assumed to be correct, which is not true if grammatical errors appear close to each other in a sentence. In this paper, we formulate grammatical error correction as a task suited for joint inference. Given an input sentence, different types of errors are jointly corrected as follows. For every possible error correction, we assign a score which measures how grammatical the resulting sentence is if the correction is accepted. We then choose a set of corrections which will result in a corrected sentence that is judged to be the most grammatical. The inference problem is solved by integer lin1456 ear programming (ILP). Variables of ILP are indicators of possible grammatical error corrections, the objective function aims to select the best set of corrections, and the constraints help to enforce a valid and grammatical output. Furthermore, ILP not only provides a method to solve the inference problem, but also allows for a natural integration of grammatical constraints into a machine learning approach. We will show that ILP fully utilizes individual error classiers, while prior knowledge on grammatical error correction can be easily expressed using linear constraints. We evaluate our proposed ILP approach on the test data from the Helping Our Own (HOO) 2011 shared task (Dale and Kilgarriff, 2011). Experimental results show that the ILP formulation is competitive with stateof-the-art grammatical error correction systems. The remainder of this paper is organized as follows. Section 2 gives the related work. Section 3 introduces a basic ILP formulation. Sections 4 and 5 improve the basic ILP formulation with more constraints and second order variables, respectively. Section 6 presents the experimental results. Section 7 concludes the paper. 2 Related Work The knowledge engineering approach has been used in early grammatical error correction systems (Murata and Nagao, 1993; Bond et al., 1995; Bond and Ikehara, 1996; Heine, 1998). However, as noted by (Han et al., 2006), rules usually have exceptions, and it is hard to utilize corpus statistics in handcrafted rules. As such, the machine learning approach has become the dominant approach in grammatical error correction. Previous work in the machine learning approach typically formulates the task as a classication problem. Article and preposition errors are the two main research topics (Knight and Chander, 1994; Han et al., 2006; Tetreault and Chodorow, 2008; Dahlmeier and Ng, 2011). Features used in classication include surrounding words, part-of-speech tags, language model scores (Gamon, 2010), and parse tree structures (Tetreault et al., 2010). Learning algorithms used include maximum entropy (Han et al., 2006; Tetreault and Chodorow, 2008), averaged perceptron, na¨ve Bayes (Rozovskaya and Roth, 2011), etc. Besides article and preposition errors, verb form errors also attract some attention recently (Liu et al., 2010; Tajiri et al., 2012). Several research efforts have started to deal with correcting different errors in an integrated manner (Gamon, 2011; Park and Levy, 2011; Dahlmeier and Ng, 2012a). Gamon (2011) uses a high-order sequential labeling model to detect various errors. Park and Levy (2011) models grammatical error correction using a noisy channel model, where a predened generative model produces correct sentences and errors are added through a noise model. The work of (Dahlmeier and Ng, 2012a) is probably the closest to our current work. 
It uses a beam-search decoder, which iteratively corrects an input sentence to arrive at the best corrected output. The difference between their work and our ILP approach is that the beam-search decoder returns an approximate solution to the original inference problem, while ILP returns an exact solution to an approximate inference problem.

Integer linear programming has been successfully applied to many NLP tasks, such as dependency parsing (Riedel and Clarke, 2006; Martins et al., 2009), semantic role labeling (Punyakanok et al., 2005), and event extraction (Riedel and McCallum, 2011).

3 Inference with First Order Variables

The inference problem for grammatical error correction can be stated as follows: “Given an input sentence, choose a set of corrections which results in the best output sentence.” In this paper, this problem will be expressed and solved by integer linear programming (ILP). Expressing an NLP task in the framework of ILP requires the following steps:

1. Encode the output space of the NLP task using integer variables;
2. Express the inference objective as a linear objective function; and
3. Introduce problem-specific constraints to refine the feasible output space.

In the following sections, we follow the above formulation. For the grammatical error correction task, the variables in ILP are indicators of the corrections that a word needs, the objective function measures how grammatical the whole sentence is if some corrections are accepted, and the constraints guarantee that the corrections do not conflict with each other.

3.1 First Order Variables

Given an input sentence, the main question that a grammatical error correction system needs to answer is: What corrections at which positions? For example, is it reasonable to change the word cats to cat in the sentence A cats sat on the mat? Given the corrections at various positions in a sentence, the system can readily come up with the corrected sentence. Thus, a natural way to encode the output space of grammatical error correction requires information about sentence position, error type (e.g., noun number error), and correction (e.g., cat).

Suppose s is an input sentence, and |s| is its length (i.e., the number of words in s). Define first order variables

$Z^k_{l,p} \in \{0, 1\}$,   (1)

where $p \in \{1, 2, \ldots, |s|\}$ is a position in the sentence, $l \in L$ is an error type, and $k \in \{1, 2, \ldots, C(l)\}$ is a correction of type $l$. Here $L$ is the set of error types and $C(l)$ is the number of corrections for error type $l$. If $Z^k_{l,p} = 1$, the word at position $p$ should be corrected to $k$, which is of error type $l$. Otherwise, the word at position $p$ is not applicable for this correction. Deletion of a word is represented as $k = \epsilon$. For example, $Z^a_{\mathrm{Art},1} = 1$ means that the article (Art) at position 1 of the sentence should be a. If $Z^a_{\mathrm{Art},1} = 0$, then the article should not be a. Table 1 contains the error types handled in this work, their possible corrections, and applicable positions in a sentence.

Type l      | Correction k            | C(l)            | Applicable                    | Variables
article     | a, the, ɛ               | 3               | article or NP                 | Z^a_{Art,p}, Z^{the}_{Art,p}, Z^ɛ_{Art,p}
preposition | on, at, in, ...         | |confusion set| | preposition                   | Z^{on}_{Prep,p}, Z^{at}_{Prep,p}, Z^{in}_{Prep,p}, ...
noun number | singular, plural        | 2               | noun                          | Z^{singular}_{Noun,p}, Z^{plural}_{Noun,p}
punctuation | punctuation symbols     | |candidates|    | determined by rules           | Z^{original}_{Punct,p}, Z^{cand1}_{Punct,p}, Z^{cand2}_{Punct,p}, ...
spelling    | correctly spelled words | |candidates|    | determined by a spell checker | Z^{original}_{Spell,p}, Z^{cand1}_{Spell,p}, Z^{cand2}_{Spell,p}, ...
Table 1: Error types and corrections. The Applicable column indicates which parts of a sentence are applicable to an error type. In the first row, ɛ means deleting an article.

3.2 The Objective Function

The objective of the inference problem is to find the best output sentence. However, there are exponentially many different combinations of corrections, and it is not possible to consider all combinations. Therefore, instead of solving the original inference problem, we will solve an approximate inference problem by introducing the following decomposable assumption: Measuring the output quality of multiple corrections can be decomposed into measuring the quality of the individual corrections.

Let $s'$ be the resulting sentence if the correction $Z^k_{l,p}$ is accepted for $s$; for simplicity we denote this as $s \xrightarrow{Z^k_{l,p}} s'$. Let $w_{l,p,k} \in \mathbb{R}$ measure how grammatical $s'$ is. Define the objective function as

$\max \sum_{l,p,k} w_{l,p,k} Z^k_{l,p}$.

This linear objective function aims to select a set of $Z^k_{l,p}$ such that the sum of their weights is the largest among all possible candidate corrections, which in turn gives the most grammatical sentence under the decomposable assumption. Although the decomposable assumption is a strong assumption, it performs well in practice, and one can relax the assumption by using higher order variables (see Section 5).

For an individual correction $Z^k_{l,p}$, we measure the quality of $s'$ based on three factors:

1. The language model score $h(s', \mathrm{LM})$ of $s'$ based on a large web corpus;
2. The confidence scores $f(s', t)$ of classifiers, where $t \in E$ and $E$ is the set of classifiers. For example, an article classifier trained on well-written documents will score every article in $s'$, and measure the quality of $s'$ from the perspective of an article “expert”.
3. The disagreement scores $g(s', t)$ of classifiers, where $t \in E$. A disagreement score measures how ungrammatical $s'$ is from the perspective of a classifier. Take the article classifier as an example. For each article instance in $s'$, the classifier computes the difference between the maximum confidence score among all possible choices of articles and the confidence score of the observed article. This difference represents the disagreement on the observed article by the article classifier or “expert”. Define the maximum difference over all article instances in $s'$ to be the article classifier disagreement score of $s'$. In general, this score is larger if the sentence $s'$ is more ungrammatical.

The weight $w_{l,p,k}$ is a combination of these scores:

$w_{l,p,k} = \nu_{\mathrm{LM}} h(s', \mathrm{LM}) + \sum_{t \in E} \lambda_t f(s', t) + \sum_{t \in E} \mu_t g(s', t)$,   (2)

where $\nu_{\mathrm{LM}}$, $\lambda_t$, and $\mu_t$ are the coefficients.

3.3 Constraints

An observation on the objective function is that it is possible, for example, to set $Z^a_{\mathrm{Art},p} = 1$ and $Z^{the}_{\mathrm{Art},p} = 1$, which means there are two corrections a and the for the same sentence position $p$, but obviously only one article is allowed. A simple constraint to avoid these conflicts is

$\sum_k Z^k_{l,p} = 1, \quad \forall$ applicable $l, p$.

It reads as follows: for each error type $l$, only one output $k$ is allowed at any applicable position $p$ (note that $Z^k_{l,p}$ is a Boolean variable). Putting the variables, objective function, and constraints together, the ILP problem with respect to first order variables is as follows:

$\max \sum_{l,p,k} w_{l,p,k} Z^k_{l,p}$   (3)
s.t. $\sum_k Z^k_{l,p} = 1, \quad \forall$ applicable $l, p$   (4)
$Z^k_{l,p} \in \{0, 1\}$   (5)

The ILP problem is solved using lp_solve (http://lpsolve.sourceforge.net/), an integer linear programming solver based on the revised simplex method and the branch-and-bound method for integers.
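To make the formulation concrete, the following is a minimal sketch of the first-order ILP in (3)-(5) for the running example A cats sat on the mat. This is not the authors' implementation: it uses the Python library PuLP rather than lp_solve, and the candidate sets and weights $w_{l,p,k}$ are invented for illustration (the real weights come from Equation (2)).

```python
# Minimal sketch of the first-order ILP (3)-(5) using PuLP instead of lp_solve.
# The candidate sets and weights below are invented for illustration only.
import pulp

# (error type, position) -> candidate corrections for "A cats sat on the mat ."
# "eps" stands for the empty correction (deleting the word).
candidates = {
    ("Art", 1):  ["a", "the", "eps"],
    ("Noun", 2): ["singular", "plural"],
    ("Prep", 4): ["on", "at", "in"],
    ("Art", 5):  ["a", "the", "eps"],
    ("Noun", 6): ["singular", "plural"],
}

# Hypothetical weights w_{l,p,k}; in the paper these combine the language model
# score, classifier confidence scores, and classifier disagreement scores.
w = {(l, p, k): 0.1 for (l, p), ks in candidates.items() for k in ks}
w.update({("Art", 1, "a"): 0.9, ("Noun", 2, "singular"): 0.8,
          ("Prep", 4, "on"): 0.7, ("Art", 5, "the"): 0.9,
          ("Noun", 6, "singular"): 0.8})

prob = pulp.LpProblem("grammatical_error_correction", pulp.LpMaximize)

# First order variables Z^k_{l,p} in {0, 1}.
Z = {(l, p, k): pulp.LpVariable(f"Z_{l}_{p}_{k}", cat="Binary")
     for (l, p), ks in candidates.items() for k in ks}

# Objective (3): maximize the total weight of the accepted corrections.
prob += pulp.lpSum(w[key] * var for key, var in Z.items())

# Constraint (4): exactly one output k per applicable (l, p).
for (l, p), ks in candidates.items():
    prob += pulp.lpSum(Z[(l, p, k)] for k in ks) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (l, p, k), var in sorted(Z.items(), key=lambda item: item[0][1]):
    if var.varValue == 1:
        print(f"position {p}: {l} -> {k}")
```

With these toy weights the solver keeps a at position 1 and selects the singular noun at position 2, i.e., it prefers A cat sat on the mat ., matching the correction discussed in the illustrating example that follows.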
3.4 An Illustrating Example

To illustrate the ILP formulation, consider an example input sentence s:

A cats sat on the mat .   (6)

First, the constraint (4) at position 1 is

$Z^a_{\mathrm{Art},1} + Z^{the}_{\mathrm{Art},1} + Z^{\epsilon}_{\mathrm{Art},1} = 1$,

which means only one article in {a, the, ɛ} is selected.

Next, to compute $w_{l,p,k}$, we collect the language model score and confidence scores from the article (ART), preposition (PREP), and noun number (NOUN) classifiers, i.e., E = {ART, PREP, NOUN}. The weight for $Z^{singular}_{\mathrm{Noun},2}$ is

$w_{\mathrm{Noun},2,singular} = \nu_{\mathrm{LM}} h(s', \mathrm{LM}) + \lambda_{\mathrm{ART}} f(s', \mathrm{ART}) + \lambda_{\mathrm{PREP}} f(s', \mathrm{PREP}) + \lambda_{\mathrm{NOUN}} f(s', \mathrm{NOUN}) + \mu_{\mathrm{ART}} g(s', \mathrm{ART}) + \mu_{\mathrm{PREP}} g(s', \mathrm{PREP}) + \mu_{\mathrm{NOUN}} g(s', \mathrm{NOUN})$,

where $s \xrightarrow{Z^{singular}_{\mathrm{Noun},2}} s'$ = A cat sat on the mat .

The confidence score $f(s', t)$ of classifier $t$ is the average of the confidence scores of $t$ on the applicable instances in $s'$. For example, there are two article instances in $s'$, located at positions 1 and 5 respectively; hence,

$f(s', \mathrm{ART}) = \frac{1}{2}\big(f(s'[1], 1, \mathrm{ART}) + f(s'[5], 5, \mathrm{ART})\big) = \frac{1}{2}\big(f(a, 1, \mathrm{ART}) + f(the, 5, \mathrm{ART})\big)$.

Here, $f(s'[p], p, \mathrm{ART})$ refers to the confidence score of the article classifier at position $p$, and $s'[p]$ is the word at position $p$ of $s'$. Similarly, the disagreement score $g(s', \mathrm{ART})$ of the article classifier is

$g(s', \mathrm{ART}) = \max(g_1, g_2)$,
$g_1 = \max_k f(k, 1, \mathrm{ART}) - f(a, 1, \mathrm{ART})$,
$g_2 = \max_k f(k, 5, \mathrm{ART}) - f(the, 5, \mathrm{ART})$.

Putting them together, the weight for $Z^{singular}_{\mathrm{Noun},2}$ is

$w_{\mathrm{Noun},2,singular} = \nu_{\mathrm{LM}} h(s', \mathrm{LM}) + \frac{\lambda_{\mathrm{ART}}}{2}\big(f(a, 1, \mathrm{ART}) + f(the, 5, \mathrm{ART})\big) + \lambda_{\mathrm{PREP}} f(on, 4, \mathrm{PREP}) + \frac{\lambda_{\mathrm{NOUN}}}{2}\big(f(cat, 2, \mathrm{NOUN}) + f(mat, 6, \mathrm{NOUN})\big) + \mu_{\mathrm{ART}} g(s', \mathrm{ART}) + \mu_{\mathrm{PREP}} g(s', \mathrm{PREP}) + \mu_{\mathrm{NOUN}} g(s', \mathrm{NOUN})$.

Input       | A                                         | cats                                       | sat | on                                                | the                                       | mat
Corrections | The, ɛ                                    | cat                                        |     | at, in                                            | a, ɛ                                      | mats
Variables   | Z^a_{Art,1}, Z^{the}_{Art,1}, Z^ɛ_{Art,1} | Z^{singular}_{Noun,2}, Z^{plural}_{Noun,2} |     | Z^{on}_{Prep,4}, Z^{at}_{Prep,4}, Z^{in}_{Prep,4} | Z^a_{Art,5}, Z^{the}_{Art,5}, Z^ɛ_{Art,5} | Z^{singular}_{Noun,6}, Z^{plural}_{Noun,6}
Table 2: The possible corrections on example (6).

3.5 Complexity

The time complexity of ILP is determined by the number of variables and constraints. Assume that for each sentence position, at most K classifiers are applicable (in most cases K = 1; an example of K > 1 is a noun that requires both changing the word form between singular and plural and inserting an article, for which K = 2). The number of variables is $O(K|s|C(l^*))$, where $l^* = \arg\max_{l \in L} C(l)$. The number of constraints is $O(K|s|)$.

4 Constraints for Prior Knowledge

4.1 Modification Count Constraints

In practice, we usually have some rough gauge of the quality of an input sentence. If an input sentence is mostly grammatical, the system is expected to make few corrections. This requirement can be easily satisfied by adding modification count constraints. In this work, we constrain the number of modifications according to error types. For the error type $l$, a parameter $N_l$ controls the number of modifications allowed for type $l$. For example, the modification count constraint for article corrections is

$\sum_{p,k} Z^k_{\mathrm{Art},p} \le N_{\mathrm{Art}}$, where $k \ne s[p]$.   (7)

The condition ensures that the correction $k$ is different from the original word in the input sentence. Hence, the summation only counts real modifications. There are similar constraints for preposition, noun number, and spelling corrections:

$\sum_{p,k} Z^k_{\mathrm{Prep},p} \le N_{\mathrm{Prep}}$, where $k \ne s[p]$,   (8)
$\sum_{p,k} Z^k_{\mathrm{Noun},p} \le N_{\mathrm{Noun}}$, where $k \ne s[p]$,   (9)
$\sum_{p,k} Z^k_{\mathrm{Spell},p} \le N_{\mathrm{Spell}}$, where $k \ne s[p]$.   (10)

4.2 Article-Noun Agreement Constraints

An advantage of the ILP formulation is that it is relatively easy to incorporate prior linguistic knowledge.
We now take article-noun agreement as an example to illustrate how to encode such prior knowledge using linear constraints. A noun in plural form cannot have a (or an) as its article. That two Boolean variables Z1 and Z2 are mutually exclusive can be handled using a simple inequality Z1 + Z2 ≤1. Thus, the following inequality correctly enforces article-noun agreement: Za Art,p1 + Zplural Noun,p2 ≤1, (11) where the article at p1 modies the noun at p2. 4.3 Dependency Relation Constraints Another set of constraints involves dependency relations, including subject-verb relation and determiner-noun relation. Specically, for a noun n at position p, we check the word w related to n via a child-parent or parent-child relation. If w belongs to a set of verbs or determiners (are, were, these, all) that takes a plural noun, then the noun n is required to be in plural form by adding the following constraint: Zplural Noun,p = 1. (12) Similarly, if a noun n at position p is required to be in singular form due to subject-verb relation or determiner-noun relation, we add the following constraint: Zsingular Noun,p = 1. (13) 5 Inference with Second Order Variables 5.1 Motivation and Denition To relax the decomposable assumption in Section 3.2, instead of treating each correction separately, one can combine multiple corrections into a single correction by introducing higher order variables. 1460 Consider the sentence A cat sat on the mat. When measuring the gain due to Zplural Noun,2 = 1 (change cat to cats), the weight wNoun,2,plural is likely to be small since A cats will get a low language model score, a low article classier condence score, and a low noun number classier condence score. Similarly, the weight wArt,1,ɛ of Zɛ Art,1 (delete article A) is also likely to be small because of the missing article. Thus, if one considers the two corrections separately, they are both unlikely to appear in the nal corrected output. However, the correction from A cat sat on the mat. to Cats sat on the mat. should be a reasonable candidate, especially if the context indicates that there are many cats (more than one) on the mat. Due to treating corrections separately, it is difcult to deal with multiple interacting corrections with only rst order variables. In order to include the correction ɛ Cats, one can use a new set of variables, second order variables. To keep symbols clear, let Z = {Zu|Zu = Zk l,p, ∀l, p, k} be the set of rst order variables, and wu = wl,p,k be the weight of Zu = Zk l,p. Dene a second order variable Xu,v: Xu,v = Zu ∧Zv, (14) where Zu and Zv are rst order variables: Zu ≜Zk1 l1,p1, Zv ≜Zk2 l2,p2. (15) The denition of Xu,v states that a second order variable is set to 1 if and only if its two component rst order variables are both set to 1. Thus, it combines two corrections into a single correction. In the above example, a second order variable is introduced: Xu,v = Zɛ Art,1 ∧Zplural Noun,2, s Xu,v −−−→s′ = Cats sat on the mat . Similar to rst order variables, let wu,v be the weight of Xu,v. Note that denition (2) only depends on the output sentence s′, and the weight of the second order variable wu,v can be dened in the same way: wu,v = νLMh(s′, LM) + ∑ t∈E λtf(s′, t) + ∑ t∈E µtg(s′, t). (16) 5.2 ILP with Second Order Variables A set of new constraints is needed to enforce consistency between the rst and second order variables. 
These constraints are the linearization of denition (14) of Xu,v: Xu,v = Zu ∧Zv ⇔ Xu,v ≤ Zu Xu,v ≤ Zv Xu,v ≥ Zu + Zv −1 (17) A new objective function combines the weights from both rst and second order variables: max ∑ l,p,k wl,p,kZk l,p + ∑ u,v wu,vXu,v. (18) In our experiments, due to noisy data, some weights of second order variables are small, even if both of its rst order variables have large weights and satisfy all prior knowledge constraints. They will affect ILP proposing good corrections. We nd that the performance will be better if we change the weights of second order variables to w′ u,v, where w′ u,v ≜max{wu,v, wu, wv}. (19) Putting them together, (20)-(25) is an ILP formulation using second order variables, where X is the set of all second order variables which will be explained in the next subsection. max ∑ l,p,k wl,p,kZk l,p + ∑ u,v w′ u,vXu,v (20) s.t. ∑ k Zk l,p = 1, ∀applicable l, p (21) Xu,v ≤Zu, (22) Xu,v ≤Zv, (23) Xu,v ≥Zu + Zv −1, ∀Xu,v ∈X (24) Xu,v, Zk l,p ∈{0, 1} (25) 5.3 Complexity and Variable Selection Using the notation in section 3.5, the number of second order variables is O(|Z|2) = O(K2|s|2C(l∗)2) and the number of constraints is O(K2|s|2C(l∗)2). More generally, for variables with higher order h ≥2, the number of variables (and constraints) is O(Kh|s|hC(l∗)h). Note that both the number of variables and the number of constraints increase exponentially with increasing variable order. In practice, a small subset of second order variables is sufcient to 1461 Data set Sentences Words Edits Dev set 939 22,808 1,264 Test set 722 18,790 1,057 Table 3: Overview of the HOO 2011 data sets. Corrections are called edits in the HOO 2011 shared task. achieve good performance. For example, noun number corrections are only coupled with nearby article corrections, and have no connection with distant or other types of corrections. In this work, we only introduce second order variables that combine article corrections and noun number corrections. Furthermore, we require that the article and the noun be in the same noun phrase. The set X of second order variables in Equation (24) is dened as follows: X ={Xu,v = Zu ∧Zv|l1 = Art, l2 = Noun, s[p1], s[p2] are in the same noun phrase}, where l1, l2, p1, p2 are taken from Equation (15). 6 Experiments Our experiments mainly focus on two aspects: how our ILP approach performs compared to other grammatical error correction systems; and how the different constraints and the second order variables affect the ILP performance. 6.1 Evaluation Corpus and Metric We follow the evaluation setup in the HOO 2011 shared task on grammatical error correction (Dale and Kilgarriff, 2011). The development set and test set in the shared task consist of conference and workshop papers taken from the Association for Computational Linguistics (ACL). Table 3 gives an overview of the data sets. System performance is measured by precision, recall, and F measure: P = # true edits # system edits, R = # true edits # gold edits, F = 2PR P + R. (26) The difculty lies in how to generate the system edits from the system output. In the HOO 2011 shared task, participants can submit system edits directly or the corrected plain-text system output. In the latter case, the ofcial HOO scorer will extract system edits based on the original (ungrammatical) input text and the corrected system output text, using GNU Wdiff3. Consider an input sentence The data is similar with test set. taken from (Dahlmeier and Ng, 2012a). The gold-standard edits are with →to and ɛ →the. 
That is, the grammatically correct sentence should be The data is similar to the test set. Suppose the corrected output of a system to be evaluated is exactly this perfectly corrected sentence The data is similar to the test set. However, the ofcial HOO scorer using GNU Wdiff will automatically extract only one system edit with →to the for this system output. Since this single system edit does not match any of the two gold-standard edits, the HOO scorer returns an F measure of 0, even though the system output is perfectly correct. In order to overcome this problem, the MaxMatch (M 2) scorer was proposed in (Dahlmeier and Ng, 2012b). Given a set of gold-standard edits, the original (ungrammatical) input text, and the corrected system output text, the M2 scorer searches for the system edits that have the largest overlap with the gold-standard edits. For the above example, the system edits automatically determined by the M2 scorer are identical to the goldstandard edits, resulting in an F measure of 1 as we would expect. We will use the M2 scorer in this paper to determine the best system edits. Once the system edits are found, P, R, and F are computed using the standard denition (26). 6.2 ILP Conguration 6.2.1 Variables The rst order variables are given in Table 1. If the indenite article correction a is chosen, then the nal choice between a and an is decided by a rule-based post-processing step. For each preposition error variable Zk Prep,p, the correction k is restricted to a pre-dened confusion set of prepositions which depends on the observed preposition at position p. For example, the confusion set of on is { at, for, in, of }. The list of prepositions corrected by our system is about, among, at, by, for, in, into, of, on, over, to, under, with, and within. Only selected positions in a sentence (determined by rules) undergo punctuation correction. The spelling correction candidates are given by a spell checker. We used GNU Aspell4 in our work. 3http://www.gnu.org/software/wdiff/ 4http://aspell.net 1462 6.2.2 Weights As described in Section 3.2, the weight of each variable is a linear combination of the language model score, three classier condence scores, and three classier disagreement scores. We use the Web 1T 5-gram corpus (Brants and Franz, 2006) to compute the language model score for a sentence. Each of the three classiers (article, preposition, and noun number) is trained with the multi-class condence weighted algorithm (Crammer et al., 2009). The training data consists of all non-OCR papers in the ACL Anthology5, minus the documents that overlap with the HOO 2011 data set. The features used for the classiers follow those in (Dahlmeier and Ng, 2012a), which include lexical and part-of-speech n-grams, lexical head words, web-scale n-gram counts, dependency heads and children, etc. Over 5 million training examples are extracted from the ACL Anthology for use as training data for the article and noun number classiers, and over 1 million training examples for the preposition classier. Finally, the language model score, classier condence scores, and classier disagreement scores are normalized to take values in [0, 1], based on the HOO 2011 development data. We use the following values for the coefcients: νLM = 1 (language model); λt = 1 (classier condence); and µt = −1 (classier disagreement). 6.2.3 Constraints In Section 4, three sets of constraints are introduced: modication count (MC), article-noun agreement (ANA), and dependency relation (DR) constraints. 
The values for the modication count parameters are set as follows: NArt = 3, NPrep = 2, NNoun = 2, and NSpell = 1. 6.3 Experimental Results We compare our ILP approach with two other systems: the beam search decoder of (Dahlmeier and Ng, 2012a) which achieves the best published performance to date on the HOO 2011 data set, and UI Run1 (Rozovskaya et al., 2011) which achieves the best performance among all participating systems at the HOO 2011 shared task. The results are given in Table 4. The HOO 2011 shared task provides two sets of gold-standard edits: the original gold-standard edits produced by the annotator, and the ofcial gold5http://aclweb.org/anthology-new/ System Original Ofcial P R F P R F UI Run1 40.86 11.21 17.59 54.61 14.57 23.00 Beam search 30.28 19.17 23.48 33.59 20.53 25.48 ILP 20.54 27.93 23.67 21.99 29.04 25.03 Table 4: Comparison of three grammatical error correction systems. standard edits which incorporated corrections proposed by the HOO 2011 shared task participants. All three systems listed in Table 4 use the M2 scorer to extract system edits. The results of the beam search decoder and UI Run1 are taken from Table 2 of (Dahlmeier and Ng, 2012a). Overall, ILP inference outperforms UI Run1 on both the original and ofcial gold-standard edits, and the improvements are statistically signicant at the level of signicance 0.01. The performance of ILP inference is also competitive with the beam search decoder. The results indicate that a grammatical error correction system benets from corrections made at a whole sentence level, and that joint correction of multiple error types achieves state-of-the-art performance. Table 5 provides the comparison of the beam search decoder and ILP inference in detail. The main difference between the two is that, except for spelling errors, ILP inference gives higher recall than the beam search decoder, while its precision is lower. This indicates that ILP inference is more aggressive in proposing corrections. Next, we evaluate ILP inference in different congurations. We only focus on article and noun number error types. Table 6 shows the performance of ILP in different congurations. From the results, MC and DR constraints improve precision, indicating that the two constraints can help to restrict the number of erroneous corrections. Including second order variables gives the best F measure, which supports our motivation for introducing higher order variables. Adding article-noun agreement constraints (ANA) slightly decreases performance. By examining the output, we nd that although the overall performance worsens slightly, the agreement requirement is satised. For example, for the input We utilize search engine to ... , the output without ANA is We utilize a search engines to ... but with ANA is We utilize the search engines to ... , while the only gold edit inserts a. 1463 Original Ofcial Error type Beam search ILP Beam search ILP P R F P R F P R F P R F Spelling 36.84 0.69 1.35 60.00 0.59 1.17 36.84 0.66 1.30 60.00 0.57 1.12 + Article 19.84 12.59 15.40 18.54 14.75 16.43 22.45 13.72 17.03 20.37 15.61 17.68 + Preposition 22.62 14.26 17.49 17.61 18.58 18.09 24.84 15.14 18.81 19.24 19.68 19.46 + Punctuation 24.27 18.09 20.73 20.52 23.50 21.91 27.13 19.58 22.75 22.49 24.98 23.67 + Noun number 30.28 19.17 23.48 20.54 27.93 23.67 33.59 20.53 25.48 21.99 29.04 25.03 Table 5: Comparison of the beam search decoder and ILP inference. ILP is equipped with all constraints (MC, ANA, DR) and default parameters. 
Second order variables related to article and noun number error types are also used in the last row. Setting Original Ofcial P R F P R F Art+Nn, 1st ord. 17.19 19.37 18.22 18.59 20.44 19.47 + MC 17.87 18.49 18.17 19.23 19.39 19.31 + ANA 17.78 18.39 18.08 19.04 19.11 19.07 + DR 17.95 18.58 18.26 19.23 19.30 19.26 + 2nd ord. 18.75 18.88 18.81 20.04 19.58 19.81 Table 6: The effects of different constraints and second order variables. 7 Conclusion In this paper, we model grammatical error correction as a joint inference problem. The inference problem is solved using integer linear programming. We provide three sets of constraints to incorporate additional linguistic knowledge, and introduce a further extension with second order variables. Experiments on the HOO 2011 shared task show that ILP inference achieves state-of-the-art performance on grammatical error correction. Acknowledgments This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Ofce. References Francis Bond and Satoru Ikehara. 1996. When and how to disambiguate? countability in machine translation. In Proceedings of the International Seminar on Multimodal Interactive Disambiguation. Francis Bond, Kentaro Ogura, and Tsukasa Kawaoka. 1995. Noun phrase reference in Japanese-to-English machine translation. In Proceedings of the 6th International Conference on Theoretical and Methodological Issues in Machine Translation. Thorsten Brants and Alex Franz. 2006. Web 1T 5gram corpus version 1.1. Technical report, Google Research. Koby Crammer, Mark Dredze, and Alex Kulesza. 2009. Multi-class condence weighted algorithms. In Proceedings of EMNLP. Daniel Dahlmeier and Hwee Tou Ng. 2011. Grammatical error correction with alternating structure optimization. In Proceedings of ACL. Daniel Dahlmeier and Hwee Tou Ng. 2012a. A beamsearch decoder for grammatical error correction. In Proceedings of EMNLP. Daniel Dahlmeier and Hwee Tou Ng. 2012b. Better evaluation for grammatical error correction. In Proceedings of NAACL. Robert Dale and Adam Kilgarriff. 2011. Helping Our Own: The HOO 2011 pilot shared task. In Proceedings of the 13th European Workshop on Natural Language Generation. Robert Dale, Ilya Anisimoff, and George Narroway. 2012. HOO 2012: A report on the preposition and determiner error correction shared task. In Proceedings of the Seventh Workshop on Innovative Use of NLP for Building Educational Applications, pages 54–62. Michael Gamon. 2010. Using mostly native data to correct errors in learners' writing. In Proceedings of NAACL. 1464 Michael Gamon. 2011. High-order sequence modeling for language learner error detection. In Proceedings of the Sixth Workshop on Innovative Use of NLP for Building Educational Applications. Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2006. Detecting errors in English article usage by non-native speakers. Natural Language Engineering, 12(2). Julia Heine. 1998. Deniteness predictions for Japanese noun phrases. In Proceedings of ACLCOLING. Kevin Knight and Ishwar Chander. 1994. Automated postediting of documents. In Proceedings of AAAI. Xiaohua Liu, Bo Han, Kuan Li, Stephan Hyeonjun Stiller, and Ming Zhou. 2010. SRL-based verb selection for ESL. In Proceedings of EMNLP. Andre Martins, Noah Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of ACLIJCNLP. Masaki Murata and Makoto Nagao. 1993. 
Determination of referential property and number of nouns in Japanese sentences for machine translation into English. In Proceedings of the 5th International Conference on Theoretical and Methodological Issues in Machine Translation. Y. Albert Park and Roger Levy. 2011. Automated whole sentence grammar correction using a noisy channel model. In Proceedings of ACL. Vasin Punyakanok, Dan Roth, Wen tau Yih, and Dav Zimak. 2005. Learning and inference over constrained output. In Proceedings of IJCAI. Sebastian Riedel and James Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. In Proceedings of EMNLP. Sebastian Riedel and Andrew McCallum. 2011. Fast and robust joint models for biomedical event extraction. In Proceedings of EMNLP. Alla Rozovskaya and Dan Roth. 2011. Algorithm selection and model adaptation for ESL correction tasks. In Proceedings of ACL. Alla Rozovskaya, Mark Sammons, Joshua Gioja, and Dan Roth. 2011. University of Illinois system in HOO text correction shared task. In Proceedings of the 13th European Workshop on Natural Language Generation. Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and aspect error correction for ESL learners using global context. In Proceedings of ACL. Joel R. Tetreault and Martin Chodorow. 2008. The ups and downs of preposition error detection in ESL writing. In Proceedings of COLING. Joel Tetreault, Jennifer Foster, and Martin Chodorow. 2010. Using parse features for preposition selection and error detection. In Proceedings of ACL. 1465
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1466–1476, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Text-Driven Toponym Resolution using Indirect Supervision Michael Speriosu Jason Baldridge Department of Linguistics University of Texas at Austin Austin, TX 78712 USA {speriosu,jbaldrid}@utexas.edu Abstract Toponym resolvers identify the specific locations referred to by ambiguous placenames in text. Most resolvers are based on heuristics using spatial relationships between multiple toponyms in a document, or metadata such as population. This paper shows that text-driven disambiguation for toponyms is far more effective. We exploit document-level geotags to indirectly generate training instances for text classifiers for toponym resolution, and show that textual cues can be straightforwardly integrated with other commonly used ones. Results are given for both 19th century texts pertaining to the American Civil War and 20th century newswire articles. 1 Introduction It has been estimated that at least half of the world’s stored knowledge, both printed and digital, has geographic relevance, and that geographic information pervades many more aspects of humanity than previously thought (Petras, 2004; Skupin and Esperb´e, 2011). Thus, there is value in connecting linguistic references to places (e.g. placenames) to formal references to places (coordinates) (Hill, 2006). Allowing for the querying and exploration of knowledge in a geographically informed way requires more powerful tools than a keyword-based search can provide, in part due to the ambiguity of toponyms (placenames). Toponym resolution is the task of disambiguating toponyms in natural language contexts to geographic locations (Leidner, 2008). It plays an essential role in automated geographic indexing and information retrieval. This is useful for historical research that combines age-old geographic issues like territoriality with modern computational tools (Guldi, 2009), studies of the effect of historically recorded travel costs on the shaping of empires (Scheidel et al., 2012), and systems that convey the geographic content in news articles (Teitler et al., 2008; Sankaranarayanan et al., 2009) and microblogs (Gelernter and Mushegian, 2011). Entity disambiguation systems such as those of Kulkarni et al. (2009) and Hoffart et al. (2011) disambiguate references to people and organizations as well as locations, but these systems do not take into account any features or measures unique to geography such as physical distance. Here we demonstrate the utility of incorporating distance measurements in toponym resolution systems. Most work on toponym resolution relies on heuristics and hand-built rules. Some use simple rules based on information from a gazetteer, such as population or administrative level (city, state, country, etc.), resolving every instance of the same toponym type to the same location regardless of context (Ladra et al., 2008). Others use relationships between multiple toponyms in a context (local or whole document) and look for containment relationships, e.g. London and England occurring in the same paragraph or as the bigram London, England (Li et al., 2003; Amitay et al., 2004; Zong et al., 2005; Clough, 2005; Li, 2007; Volz et al., 2007; Jones et al., 2008; Buscaldi and Rosso, 2008; Grover et al., 2010). 
Still others first identify unambiguous toponyms and then disambiguate other toponyms based on geopolitical relationships with or distances to the unambiguous ones (Ding et al., 2000). Many favor resolutions of toponyms within a local context or document that cover a smaller geographic area over those that are more dispersed (Rauch et al., 2003; Leidner, 2008; Grover et al., 2010; Loureiro et al., 2011; Zhang et al., 2012). Roberts et al. (2010) use relationships learned between people, organizations, and locations from Wikipedia to aid in toponym resolution when such named entities are present, but do not exploit any other textual context. 1466 Most of these approaches suffer from a major weakness: they rely primarily on spatial relationships and metadata about locations (e.g., population). As such, they often require nearby toponyms (including unambiguous or containing toponyms) to resolve ambiguous ones. This reliance can result in poor coverage when the required information is missing in the context or when a document mentions locations that are neither nearby geographically nor in a geopolitical relationship. There is a clear opportunity that most ignore: use non-toponym textual context. Spatially relevant words like downtown that are not explicit toponyms can be strong cues for resolution (Hollenstein and Purves, 2012). Furthermore, the connection between non-spatial words and locations has been successfully exploited in data-driven approaches to document geolocation (Eisenstein et al., 2010, 2011; Wing and Baldridge, 2011; Roller et al., 2012) and other tasks (Hao et al., 2010; Pang et al., 2011; Intagorn and Lerman, 2012; Hecht et al., 2012; Louwerse and Benesh, 2012; Adams and McKenzie, 2013). In this paper, we learn resolvers that use all words in local or document context. For example, the word lobster appearing near the toponym Portland indicates the location is Portland in Maine rather than Oregon or Michigan. Essentially, we learn a text classifier per toponym. There are no massive collections of toponyms labeled with locations, so we train models indirectly using geotagged Wikipedia articles. Our results show these text classifiers are far more accurate than algorithms based on spatial proximity or metadata. Furthermore, they are straightforward to combine with such algorithms and lead to error reductions for documents that match those algorithms’ assumptions. Our primary focus is toponym resolution, so we evaluate on toponyms identified by human annotators. However, it is important to consider the utility of an end-to-end toponym identification and resolution system, so we also demonstrate that performance is still strong when toponyms are detected with a standard named entity recognizer. We have implemented all the models discussed in this paper in an open source software package called Fieldspring, which is available on GitHub: http://github.com/utcompling/fieldspring Explicit instructions are provided for preparing data and running code to reproduce our results. Figure 1: Points representing the United States. 2 Data 2.1 Gazetteer Toponym resolvers need a gazetteer to obtain candidate locations for each toponym. Additionally, many gazetteers include other information such as population and geopolitical hierarchy information. We use GEONAMES, a freely available gazetteer containing over eight million entries worldwide.1 Each location entry contains a name (sometimes more than one) and latitude/longitude coordinates. 
Entries also include the location’s administrative level (e.g. city or state) and its position in the geopolitical hierarchy of countries, states, etc. GEONAMES gives the locations of regional items like states, provinces, and countries as single points. This is clearly problematic when we seek connections between words and locations: e.g. we might learn that many words associated with the USA are connected to a point in Kansas. To get around this, we represent regional locations as a set of points derived from the gazetteer. Since regional locations are named in the entries for locations they contain, all locations contained in the region are extracted (in some cases over 100,000 of them) and then k-means is run to find a smaller set of spatial centroids. These act as a tractable proxy for the spatial extent of the entire region. k is set to the number of 1◦by 1◦grid cells covered by that region. Figure 1 shows the points computed for the United States.2 A nice property of this representation is that it does not involve region shape files and the additional programming infrastructure they require. 1Downloaded April 16, 2013 from www.geonames. org. 2The representation also contains three points each in Hawaii and Alaska not shown in Figure 1. 1467 Corpus docs toks types tokstop typestop ambavg ambmax TRC-DEV 631 136k 17k 4356 613 15.0 857 TRC-DEV-NER 3165 391 18.2 857 TRC-TEST 315 68k 11k 1903 440 13.7 857 TRC-TEST-NER 1346 305 15.7 857 CWAR-DEV 228 33m 200k 157k 850 29.9 231 CWAR-TEST 113 25m 305k 85k 760 31.5 231 Table 1: Statistics of the corpora used for evaluation. Columns subscripted by top give figures for toponyms. The last two columns give the average number of candidate locations per toponym token and the number of candidate locations for the most ambiguous toponym. A location for present purposes is thus a set of points on the earth’s surface. The distance between two locations is computed as the great circle distance between the closest pair of representative points, one from each location. 2.2 Toponym Resolution Corpora We need corpora with toponyms identified and resolved by human annotators for evaluation. The TR-CONLL corpus (Leidner, 2008) contains 946 REUTERS news articles published in August 1996. It has about 204,000 words and articles range in length from a few hundred words to several thousand words. Each toponym in the corpus was identified and resolved by hand.3 We place every third article into a test portion (TRC-TEST) and the rest in a development portion. Since our methods do not learn from explicitly labeled toponyms, we do not need a training set. The Perseus Civil War and 19th Century American Collection (CWAR) contains 341 books (58 million words) written primarily about and during the American Civil War (Crane, 2000). Toponyms were annotated by a semi-automated process: a named entity recognizer identified toponyms, and then coordinates were assigned using simple rules and corrected by hand. We divide CWAR into development (CWAR-DEV) and test (CWAR-TEST) sets in the same way as TR-CONLL. Table 1 gives statistics for both corpora, including the number and ambiguity of gold standard toponyms for both as well as NER identified to3We found several systematic types of errors in the original TR-CONLL corpus, such as coordinates being swapped for some locations and some longitudes being zero or the negative of their correct values. We repaired many of these errors, though some more idiosyncratic mistakes remain. 
We, along with Jochen Leidner, will release this updated version shortly and will link to it from our Fieldspring GitHub page. ponyms for TR-CONLL.4 We use the pre-trained English NER from the OpenNLP project.5 2.3 Geolocated Wikipedia Corpus The GEOWIKI dataset contains over one million English articles from the February 11, 2012 dump of Wikipedia. Each article has human-annotated latitude/longitude coordinates. We divide the corpus into training (80%), development (10%), and test (10%) at random and perform preprocessing to remove markup in the same manner as Wing and Baldridge (2011). The training portion is used here to learn models for text-driven resolvers. 3 Toponym Resolvers Given a set of toponyms provided via annotations or identified using NER, a resolver must select a candidate location for each toponym (or, in some cases, a resolver may abstain). Here, we describe baseline resolvers, a heuristic resolver based on the usual cues used in most toponym resolvers, and several text-driven resolvers. We also discuss combining heuristic and text-driven resolvers. 3.1 Baseline Resolvers RANDOM For each toponym, the RANDOM resolver randomly selects a location from those associated in the gazetteer with that toponym. POPULATION The POPULATION resolver selects the location with the greatest population for each toponym. It is generally quite effective, but when a toponym has several locations with large populations, it is often wrong. Also, it can only be used when such information is available, and it is 4States and countries are not annotated in CWAR, so we do not evaluate end-to-end using NER plus toponym resolution for it as there are many (falsely) false positives. 5opennlp.apache.org 1468 less effective if the population statistics are from a time period different from that of the corpus. 3.2 SPIDER Leidner (2008) describes two general and useful minimality properties of toponyms: • one sense per discourse: multiple tokens of a toponym in the same text generally do not refer to different locations in the same text • spatial minimality: different toponyms in a text tend refer to spatially near locations Many toponym resolvers exploit these (Smith and Crane, 2001; Rauch et al., 2003; Leidner, 2008; Grover et al., 2010; Loureiro et al., 2011; Zhang et al., 2012). Here, we define SPIDER (Spatial Prominence via Iterative Distance Evaluation and Reweighting) as a strong representative of such textually unaware approaches. In addition to capturing both minimality properties, it also identifies the relative prominence of the locations for each toponym in a given corpus. SPIDER resolves each toponym by finding the location for each that minimizes the sum distance to all locations for all other toponyms in the same document. On the first iteration, it tends to select locations that clump spatially: if Paris occurs with Dallas, it will choose Paris, Texas even though the topic may be a flight from Texas to France. Further iterations bring Paris, France into focus by capturing its prominence across the corpus. The key intuition is that most documents will discuss Paris, France and only a small portion of these mention places close to Paris, Texas; thus, Paris, France will be selected on the first iteration for many documents (though not for the Dallas document). SPIDER thus assigns each candidate location a weight (initialized to 1.0), which is re-estimated on each iteration. 
The adjusted distance between two locations is computed as the great circle distance divided by the product of the two locations’ weights. At the end of an iteration, each candidate location’s weight is updated to be the fraction of the times it was chosen times the number of candidates for that toponym. The weights are global, with one for each location in the gazetteer, so the same weight vector is used for each token of a given toponym on a given iteration. For example, if after the first iteration Paris, France is chosen thrice, Paris, Texas once, and Paris, Arkansas never, the global weights of these locations are (3/4)∗3=2.25, (1/4)∗3=.75, and (0/4)∗3=0, respectively (assume, for the example, there are no other locations named Paris). The sum of the weights remains equal to the number of candidate locations. The updated weights are used on the next iteration, so Paris, France will seem “closer” since any distance computed to it is divided by a number greater than one. Paris, Texas will seem somewhat further away, and Paris, Arkansas infinitely far away. The algorithm continues for a fixed number of iterations or until the weights do not change more than some threshold. Here, we run SPIDER for 10 iterations; the weights have generally converged by this point. When only one toponym is present in a document, we simply select the candidate with the greatest weight. When there is no such weight information, such as when the toponym does not cooccur with other toponyms anywhere in the corpus, we select a candidate at random. SPIDER captures prominence, but we stress it is not our main innovation: its purpose is to be a benchmark for text-driven resolvers to beat. 3.3 Text-Driven Resolvers The text-driven resolvers presented in this section all use local context windows, document context, or both, to inform disambiguation. TRIPDL We use a document geolocator trained on GEOWIKI’s document location labels. Others—such as Smith and Crane (2001)—have estimated a document-level location to inform toponym resolution, but ours is the first we are aware of to use training data from a different domain to build a document geolocator that uses all words (not only toponyms) to estimate a document’s location. We use the document geolocation method of Wing and Baldridge (2011). It discretizes the earth’s surface into 1◦by 1◦grid cells and assigns Kullback-Liebler divergences to each cell given a document, based on language models learned for each cell from geolocated Wikipedia articles. We obtain the probability of a cell c given a document d by the standard method of exponentiating the negative KL-divergence and normalizing these values over all cells: P(c|d) = exp(−KL(c, d)) P c′ exp(−KL(c′, d)) This distribution is used for all toponyms t in d to define distributions PDL(l|t, d) over candidate 1469 locations of t in document d to be the portion of P(c|d) consistent with the t’s candidate locations: PDL(l|t, d) = P(cl|d) P l′∈G(t) P(cl′|d) where G(t) is the set of the locations for t in the gazetteer, and cl is the cell containing l. TRIPDL (Toponym Resolution Informed by Predicted Document Locations) chooses the location that maximizes PDL. WISTR While TRIPDL uses an off-the-shelf document geolocator to capture the geographic gist of a document, WISTR (Wikipedia Indirectly Supervised Toponym Resolver) instead directly targets each toponym. It learns text classifiers based on local context window features trained on instances automatically extracted from GEOWIKI. 
To create the indirectly supervised training data for WISTR, the OpenNLP named entity recognizer detects toponyms in GEOWIKI, and candidate locations for each toponym are retrieved from GEONAMES. Each toponym with a location within 10km of the document location is considered a mention of that location. For example, the Empire State Building Wikipedia article has a human-provided location label of (40.75,-73.99). The toponym New York is mentioned several times in the article, and GEONAMES lists a New York at (40.71,-74.01). These points are 4.8km apart, so each mention of New York in the document is considered a reference to New York City. Next, context windows w of twenty words to each side of each toponym are extracted as features. The label for a training instance is the candidate location closest to the document location. We extract 1,489,428 such instances for toponyms relevant to our evaluation corpora. These instances are used to train logistic regression classifiers P(l|t, w) for location l and toponym t. To disambiguate a new toponym, WISTR chooses the location that maximizes this probability. Few such probabilistic toponym resolvers exist in the literature. Li (2007) builds a probability distribution over locations for each toponym, but still relies on nearby toponyms that could refer to regions that contain that toponym and requires hand construction of distributions. Other learning approaches to toponym resolution (e.g. Smith and Mann (2003)) require explicit unambiguous mentions like Portland, Maine to construct training instances, while our data gathering methodology does not make such an assumption. Overell and R¨uger (2008) and Overell (2009) only use nearby toponyms as features. Mani et al. (2010) and Qin et al. (2010) use other word types but only in a local context, and they require toponymlabeled training data. Our approach makes use of all words in local and document context and requires no explicitly labeled toponym tokens. TRAWL We bring TRIPDL, WISTR, and standard toponym resolution cues about administrative levels together with TRAWL (Toponym Resolution via Administrative levels and Wikipedia Locations). The general form of a probabilistic resolver that utilizes such information to select a location ˆl for a toponym t in document d may be defined as ˆl = arg maxl P(l, al|t, d). where al is the administrative level (country, state, city) for l in the gazetteer. This captures the fact that countries (like Sudan) tend to be referred to more often than small cities (like Sudan, Texas). The above term is simplified as follows: P(l, al|t, d) = P(al|t, d)P(l|al, t, d) ≈ P(al|t)P(l|t, d) where we approximate the administrative level prediction as independent of the document, and the location as independent of administrative level. The latter term is then expressed as a linear combination of the local context (WISTR) and the document context (TRIPDL): P(l|t, d) = λtP(l|t, ct) + (1−λt)PDL(l|t, d). λt, the weight of the local context distribution, is set according to the confidence that a prediction based on local context is correct: λt = f(t) f(t)+C , where f(t) is the fraction of training instances of toponym t of all instances extracted from GEOWIKI. C is set experimentally; C=.0001 was the optimal value for CWAR-DEV. Intuitively, the larger C is, the greater f(t) must be for the local context to be trusted over the document context. 
We define P(a_l|t), the administrative level component, to be the fraction of representative points for a candidate location l out of the number of representative points for all candidate locations of t:

P(a_l | t) = \frac{\|R_l\|}{\sum_{l' \in G(t)} \|R_{l'}\|}

where \|R_l\| is the number of representative points of l. This boosts states and countries, since higher probability is assigned to locations with more points (and cities have just one point). Taken together, the above definitions yield the TRAWL resolver, which selects the optimal candidate location \hat{l} according to

\hat{l} = \arg\max_l P(a_l | t) \big( \lambda_t P(l | t, c_t) + (1 - \lambda_t) P_{DL}(l | t, d) \big)

3.4 Combining Resolvers and Backoff

SPIDER begins with uniform weights for each candidate location of each toponym. WISTR and TRAWL both output distributions over these locations based on outside knowledge sources, and these can be used as more informed initializations of SPIDER than the uniform ones. We call these combinations WISTR+SPIDER and TRAWL+SPIDER. (We scale each toponym's distribution as output by WISTR or TRAWL by the number of candidate locations for that toponym, since the total weight for each toponym in SPIDER is the number of candidate locations, not 1.)

WISTR fails to predict when encountering a toponym it has not seen in the training data, and TRIPDL fails when a toponym only has locations in cells with no probability mass. TRAWL fails when both of these are true. In these cases, we select the candidate location geographically closest to the most likely cell according to TRIPDL's P(c|d) distribution.

3.5 Document Size

For SPIDER, runtime is quadratic in the size of documents, so breaking up documents vastly reduces runtime. It also restricts the minimality heuristic, appropriately, to smaller spans of text. For resolvers that take the surrounding document into account when resolving a toponym, such as TRIPDL and TRAWL, it can also be beneficial to divide documents into smaller subdocuments in order to get a better estimate of the overall geographic prominence of the text surrounding a toponym, though at a more coarse-grained level than the local context models provide. For these reasons, we simply divide each book in the CWAR corpus into small subdocuments of at most 20 sentences.

4 Evaluation

Many prior efforts use a simple accuracy metric: the fraction of toponyms whose predicted location is the same as the gold location. Such a metric can be problematic, however. The gazetteer used by a resolver may not contain, for a given toponym, a location whose latitude and longitude exactly match the gold label for the toponym (Leidner, 2008). Also, some errors are worse than others: predicting a toponym's location to be on the other side of the world is worse than predicting a different city in the same country, and accuracy does not reflect this difference. We instead choose a metric that measures the distance between the correct and predicted location for each toponym, and we compute the mean and median of all such error distances. This is used in document geolocation work (Eisenstein et al., 2010, 2011; Wing and Baldridge, 2011; Roller et al., 2012) and is related to the root mean squared distance metric discussed by Leidner (2008).
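The error-distance metric is straightforward to compute from predicted and gold coordinates. A minimal sketch follows; the paper specifies great-circle distance but not a particular formula, so the haversine formula and the 6371km Earth radius here are our own (standard) choices.

```python
import math
from statistics import mean, median

def great_circle_km(p1, p2):
    """Great-circle (haversine) distance in km between two (lat, lon) points
    given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def error_distance_summary(predicted, gold):
    """Mean and median error distance over aligned lists of predicted and
    gold (lat, lon) coordinates, one pair per toponym token."""
    errors = [great_circle_km(p, g) for p, g in zip(predicted, gold)]
    return mean(errors), median(errors)
```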
It is important to understand performance on plain text (without gold toponyms), which is the typical use case for applications using toponym resolvers. Both the accuracy metric and the error-distance metric encounter problems when the set of predicted toponyms is not the same as the set of gold toponyms (regardless of locations), e.g. when a named entity recognizer is used to identify toponyms. In this case, we can use precision and recall, where a true positive is defined as the prediction of a correctly identified toponym's location to be as close as possible to its gold label, given the gazetteer used. False positives occur when the NER incorrectly predicts a toponym, and false negatives occur when it fails to predict a toponym identified by the annotator. When a correctly identified toponym receives an incorrect location prediction, this counts as both a false negative and a false positive.

We primarily present results from experiments with gold toponyms, but include an accuracy measure for comparability with results from experiments run on plain text with a named entity recognizer. This accuracy metric simply computes the fraction of toponyms that were resolved as close as possible to their gold label given the gazetteer.

5 Results

Table 2 gives the performance of the resolvers on the TR-CONLL and CWAR test sets when gold toponyms are used. Values for RANDOM and SPIDER are averaged over three trials. The ORACLE row gives results when the candidate from GEONAMES closest to the annotated location is always selected.

                       TRC-TEST              CWAR-TEST
Resolver             Mean   Med.     A     Mean   Med.     A
ORACLE                105   19.8  100.0     0.0    0.0  100.0
RANDOM               3915   1412   33.5    2389   1027   11.8
POPULATION            216   23.1   81.0    1749    0.0   59.7
SPIDER10             2180   30.9   55.7     266    0.0   57.5
TRIPDL               1494   29.3   62.0     847    0.0   51.5
WISTR                 279   22.6   82.3     855    0.0   69.1
WISTR+SPIDER10        430   23.1   81.8     201    0.0   85.9
TRAWL                 235   22.6   81.4     945    0.0   67.8
TRAWL+SPIDER10        297   23.1   80.7     148    0.0   78.2

Table 2: Accuracy and error distance metrics on test sets with gold toponyms.

The ORACLE mean and median error values on TR-CONLL are nonzero due to errors in the annotations and to inconsistencies stemming from the fact that coordinates from GEONAMES were not used in the annotation of TR-CONLL. On both datasets, SPIDER achieves errors and accuracies much better than RANDOM, validating the intuition that authors tend to discuss places near each other more often than not, while some locations are more prominent in a given corpus despite violating the minimality heuristic.

The text-driven resolvers vastly outperform SPIDER, showing the effectiveness of textual cues for toponym resolution. The local context resolver WISTR is very effective: it has the highest accuracy for TR-CONLL, though two other text-based resolvers also beat the challenging POPULATION baseline's accuracy. TRAWL achieves a better mean distance metric for TR-CONLL, and when used to seed SPIDER, it obtains the lowest mean error on CWAR by a large margin. SPIDER seeded with WISTR achieves the highest accuracy on CWAR.

The overall geographic scope of CWAR, a collection of documents about the American Civil War, is much smaller than that of TR-CONLL (articles about international events). This makes toponym resolution easier overall (especially for error distances) for minimality resolvers like SPIDER, which primarily seek tightly clustered sets of locations. This behavior is quite clear in visualizations of predicted locations such as Figure 2.

[Figure 2: Visualization of how SPIDER clumps most predicted locations in the same region (above), on the CWAR-DEV corpus. TRAWL's output (below) is much more dispersed.]

On the CWAR dataset, POPULATION performs relatively poorly, demonstrating the fragility of population-based decisions for working with historical corpora. (Also, we note that POPULATION is not a resolver per se, since it only ever predicts one location for a given toponym, regardless of context.)
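Before turning to the NER-based results, here is a minimal sketch of the precision/recall bookkeeping defined in Section 4. The representation (toponym spans as dictionary keys, an is_closest_possible predicate) is our own assumption for illustration.

```python
def precision_recall_f1(pred_spans, gold_spans, is_closest_possible):
    """pred_spans: toponym spans detected by the NER (with predicted locations);
    gold_spans: annotated toponym spans; is_closest_possible(span) is True when
    the predicted location is as close to the gold label as the gazetteer allows."""
    tp = fp = fn = 0
    for span in pred_spans:
        if span not in gold_spans:
            fp += 1                    # NER predicted a spurious toponym
        elif is_closest_possible(span):
            tp += 1                    # correct toponym, best reachable location
        else:
            fp += 1                    # correct toponym, wrong location:
            fn += 1                    # counts as both false positive and false negative
    fn += sum(1 for span in gold_spans if span not in pred_spans)  # missed toponyms
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```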
Table 3 gives results on TRC-TEST when NER-identified toponyms are used. In this case, the ORACLE results are less than 100% due to the limitations of the NER, and represent the best possible results given the NER we used.

Resolver              P      R      F
ORACLE              82.6   59.9   69.4
RANDOM              25.1   18.2   21.1
POPULATION          71.6   51.9   60.2
SPIDER10            40.5   29.4   34.1
TRIPDL              51.8   37.5   43.5
WISTR               73.9   53.6   62.1
WISTR+SPIDER10      73.2   53.1   61.5
TRAWL               72.5   52.5   60.9
TRAWL+SPIDER10      72.0   52.2   60.5

Table 3: Precision, recall, and F-score of resolvers on TRC-TEST with NER-identified toponyms.

When resolvers are run on NER-identified toponyms, the text-driven resolvers that use local context again easily beat SPIDER. WISTR achieves the best performance. The named entity recognizer is likely better at detecting common toponyms than rare toponyms due to the nature of its training data, and many more local context training instances were extracted from common toponyms than from rare ones in Wikipedia. Thus, our model that uses only these local context models does best when running on NER-identified toponyms. We also measured the mean and median error distance for toponyms correctly identified by the named entity recognizer, and found that they tended to be 50-200km worse than for gold toponyms. This also makes sense given the named entity recognizer's tendency to detect common toponyms: common toponyms tend to be more ambiguous than others.

Results on TR-CONLL indicate much higher performance than the resolvers presented by Leidner (2008), whose F-scores do not exceed 36.5% with either gold or NER toponyms. (Leidner (2008) reports precision, recall, and F-score values even with gold toponyms, since his resolvers can abstain.) TRC-TEST is a subset of the documents Leidner uses (he did not split development and test data), but the results still come from overlapping data. The most direct comparison is SPIDER's F-score of 39.7% compared to his LSW03 algorithm's 35.6% (both are minimality resolvers). However, our evaluation is more heavily penalized, since SPIDER loses precision for the NER's false positives (Jack London as a location) while Leidner only evaluated on actual locations. It thus seems fair to conclude that the text-driven classifiers, with F-scores in the mid-50's, are much more accurate on the corpus than previous work.
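The per-toponym rankings behind the error analysis below (Tables 4 and 5) amount to grouping token-level error distances by toponym type. A small sketch of that aggregation, assuming a list of (toponym, error_km) pairs as input:

```python
from collections import defaultdict

def rank_toponyms_by_total_error(token_errors):
    """token_errors: iterable of (toponym, error_km) pairs, one per token.
    Returns rows of (toponym, N, mean error, total error), sorted so the
    toponyms contributing the most total error come first."""
    grouped = defaultdict(list)
    for toponym, err_km in token_errors:
        grouped[toponym].append(err_km)
    rows = [(toponym, len(errs), sum(errs) / len(errs), sum(errs))
            for toponym, errs in grouped.items()]
    return sorted(rows, key=lambda row: row[3], reverse=True)
```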
6 Error Analysis

Table 4 shows the ten toponyms that caused the greatest total error distances from TRC-DEV with gold toponyms when resolved by TRAWL, the resolver that achieves the lowest mean error on that dataset among all our resolvers.

Toponym        N    Mean    Total
Washington    25    3229    80717
Gaza          12    5936    71234
California     8    5475    43797
Montana        3   11635    34905
WA             3   11221    33662
NZ             2   14068    28136
Australia     88     280    24600
Russia        72     260    18712
OR             2    9242    18484
Sydney        12    1422    17067

Table 4: Toponyms with the greatest total error distances in kilometers from TRC-DEV with gold toponyms resolved by TRAWL. N is the number of instances, and the mean error for each toponym type is also given.

Washington, the toponym contributing the most total error, is a typical example of a toponym that is difficult to resolve, as there are two very prominent locations within the United States with that name. Choosing one when the other is correct results in an error of over 4000 kilometers. This occurs, for example, when TRAWL chooses Washington state in the phrase Israel's ambassador to Washington, where more knowledge about the status of Washington, D.C. as the political center of the United States (e.g. in the form of more or better contextual training instances) could overturn the administrative level component's preference for states.

An instance of California in a baseball-related news article is incorrectly predicted to be the town California, Pennsylvania. The context is:

...New York starter Jimmy Key left the game in the first inning after Seattle shortstop Alex Rodriguez lined a shot off his left elbow. The Yankees have lost 12 of their last 19 games and their lead in the AL East over Baltimore fell to five games. At California, Tim Wakefield pitched a six-hitter for his third complete game of the season and Mo Vaughn and Troy O'Leary hit solo home runs in the second inning as the surging Boston Red Sox won their third straight 4-1 over the California Angels. Boston has won seven of eight and is 20-6...

The presence of many east coast cues, both toponyms and other words, makes it unsurprising that the resolver would predict California, Pennsylvania despite the administrative level component's heavier weighting of the state. The average errors for the toponyms Australia and Russia are fairly small and stem from differences in how countries are represented across different gazetteers, not from truly incorrect predictions.

Table 5 shows the toponyms with the greatest errors from CWAR-DEV with gold toponyms when resolved by WISTR+SPIDER.

Toponym           N    Mean      Total
Mexico         1398    2963    4142102
Jackson        2485    1210    3007541
Monterey        353    2392     844221
Haymarket        41   15663     642170
McMinnville     145    3307     479446
Alexandria     1434     314     450863
Eastport        184    2109     388000
Lexington       796     442     351684
Winton           21   15881     333499
Clinton         170    1401     238241

Table 5: Top errors from CWAR-DEV resolved by TRAWL+SPIDER.

Rome is sometimes predicted as cities in Italy and other parts of Europe rather than Rome, Georgia, though it correctly selects the city in Georgia more often than not due to SPIDER's preference for tightly clumped sets of locations. Mexico, however, frequently gets incorrectly selected as a city in Maryland near many other locations in the corpus when TRAWL's administrative level component is not present. Many of the other toponyms contributing to the total error, such as Jackson and Lexington, are simply the result of many American towns sharing the same names and a lack of clear disambiguating context.

7 Conclusion

Our text-driven resolvers prove highly effective both for modern-day newswire texts and for 19th century texts pertaining to the Civil War. They easily outperform standard minimality toponym resolvers, but can also be combined with them. This strategy works particularly well when predicting toponyms on a corpus with relatively restricted geographic extents. Performance remains good when resolving toponyms identified automatically, indicating that end-to-end systems based on our models may improve the experience of digital humanities scholars interested in finding and visualizing toponyms in large corpora.

Acknowledgements

We thank the three anonymous reviewers, Grant DeLozier, and the UT Austin Natural Language Learning reading group for their helpful feedback; Ben Wing, for his document geolocation software; Jochen Leidner, for providing the TR-CONLL corpus as well as feedback on earlier versions of this paper; and Scott Nesbit, for providing the annotations for the CWAR corpus.
This research was supported by a grant from the Morris Memorial Trust Fund of the New York Community Trust. References B. Adams and G. McKenzie. Inferring thematic places from spatially referenced natural language descriptions. Crowdsourcing Geographic Knowledge, pages 201–221, 2013. E. Amitay, N. Har’El, R. Sivan, and A. Soffer. Web-a-Where: geotagging web content. In Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 273–280, 2004. D. Buscaldi and P. Rosso. A conceptual densitybased approach for the disambiguation of toponyms. International Journal of Geographical Information Science, 22(3):301–313, 2008. P. Clough. Extracting metadata for spatiallyaware information retrieval on the internet. In Proceedings of the 2005 workshop on Geographic information retrieval, pages 25–30. ACM, 2005. G. Crane. The Perseus Digital Library, 2000. URL http://www.perseus.tufts.edu. J. Ding, L. Gravano, and N. Shivakumar. Computing geographical scopes of web resources. In Proceedings of the 26th International Conference on Very Large Data Bases, pages 545–556, 2000. J. Eisenstein, B. O’Connor, N. Smith, and E. Xing. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1277–1287, 2010. J. Eisenstein, A. Ahmed, and E. Xing. Sparse additive generative models of text. In Proceedings of the 28th International Conference on Machine Learning, pages 1041–1048, 2011. 1474 J. Gelernter and N. Mushegian. Geo-parsing messages from microtext. Transactions in GIS, 15 (6):753–773, 2011. C. Grover, R. Tobin, K. Byrne, M. Woollard, J. Reid, S. Dunn, and J. Ball. Use of the Edinburgh geoparser for georeferencing digitized historical collections. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 368(1925): 3875–3889, 2010. J. Guldi. The spatial turn. Spatial Humanities: a Project of the Institute for Enabling, 2009. Q. Hao, R. Cai, C. Wang, R. Xiao, J. Yang, Y. Pang, and L. Zhang. Equip tourists with knowledge mined from travelogues. In Proceedings of the 19th international conference on World wide web, pages 401–410, 2010. B. Hecht, S. Carton, M. Quaderi, J. Sch¨oning, M. Raubal, D. Gergle, and D. Downey. Explanatory semantic relatedness and explicit spatialization for exploratory search. In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval, pages 415–424. ACM, 2012. L. Hill. Georeferencing: The Geographic Associations of Information. MIT Press, 2006. J. Hoffart, M. Yosef, I. Bordino, H. F¨urstenau, M. Pinkal, M. Spaniol, B. Taneva, S. Thater, and G. Weikum. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 782–792. Association for Computational Linguistics, 2011. L. Hollenstein and R. Purves. Exploring place through user-generated content: Using Flickr tags to describe city cores. Journal of Spatial Information Science, (1):21–48, 2012. S. Intagorn and K. Lerman. A probabilistic approach to mining geospatial knowledge from social annotations. In Conference on Information and Knowledge Management (CIKM), 2012. C. Jones, R. Purves, P. Clough, and H. Joho. Modelling vague places with knowledge from the web. International Journal of Geographical Information Science, 2008. S. Kulkarni, A. Singh, G. Ramakrishnan, and S. Chakrabarti. 
Collective annotation of Wikipedia entities in web text. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 457–466. ACM, 2009. S. Ladra, M. Luaces, O. Pedreira, and D. Seco. A toponym resolution service following the OGC WPS standard. In Web and Wireless Geographical Information Systems, volume 5373, pages 75–85. 2008. J. Leidner. Toponym resolution in text: Annotation, Evaluation and Applications of Spatial Grounding of Place Names. Universal Press, Boca Raton, FL, USA, 2008. H. Li, R. Srihari, C. Niu, and W. Li. InfoXtract location normalization: a hybrid approach to geographic references in information extraction. In Proceedings of the HLT-NAACL 2003 workshop on Analysis of geographic references - Volume 1, pages 39–44, 2003. Y. Li. Probabilistic toponym resolution and geographic indexing and querying. Master’s thesis, The University of Melbourne, Melbourne, Australia, 2007. V. Loureiro, I. Anast´acio, and B. Martins. Learning to resolve geographical and temporal references in text. In Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 349–352, 2011. M. Louwerse and N. Benesh. Representing spatial structure through maps and language: Lord of the Rings encodes the spatial structure of Middle Earth. Cognitive science, 36(8):1556–1569, 2012. I. Mani, C. Doran, D. Harris, J. Hitzeman, R. Quimby, J. Richer, B. Wellner, S. Mardis, and S. Clancy. SpatialML: annotation scheme, resources, and evaluation. Language Resources and Evaluation, 44(3):263–280, 2010. S. Overell. Geographic Information Retrieval: Classification, Disambiguation and Modelling. PhD thesis, Imperial College London, 2009. S. Overell and S. R¨uger. Using co-occurrence models for placename disambiguation. International Journal of Geographical Information Science, 22:265–287, 2008. Y. Pang, Q. Hao, Y. Yuan, T. Hu, R. Cai, and L. Zhang. Summarizing tourist destinations by mining user-generated travelogues and pho1475 tos. Computer Vision and Image Understanding, 115(3):352 – 363, 2011. V. Petras. Statistical analysis of geographic and language clues in the MARC record. Technical report, The University of California at Berkeley, 2004. T. Qin, R. Xiao, L. Fang, X. Xie, and L. Zhang. An efficient location extraction algorithm by leveraging web contextual information. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 53–60. ACM, 2010. E. Rauch, M. Bukatin, and K. Baker. A confidence-based framework for disambiguating geographic terms. In Proceedings of the HLT-NAACL 2003 workshop on Analysis of geographic references - Volume 1, pages 50–54, 2003. K. Roberts, C. Bejan, and S. Harabagiu. Toponym disambiguation using events. In Proceedings of the 23rd International Florida Artificial Intelligence Research Society Conference, pages 271– 276, 2010. S. Roller, M. Speriosu, S. Rallapalli, B. Wing, and J. Baldridge. Supervised text-based geolocation using language models on an adaptive grid. In Proceedings of EMNLP 2012, 2012. J. Sankaranarayanan, H. Samet, B. Teitler, M. Lieberman, and J. Sperling. TwitterStand: news in tweets. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 42–51, 2009. W. Scheidel, E. Meeks, and J. Weiland. ORBIS: The Stanford geospatial network model of the roman world. 2012. A. Skupin and A. Esperb´e. 
An alternative map of the United States based on an n-dimensional model of geographic space. Journal of Visual Languages & Computing, 22(4):290–304, 2011. D. Smith and G. Crane. Disambiguating geographic names in a historical digital library. In Proceedings of the 5th European Conference on Research and Advanced Technology for Digital Libraries, pages 127–136, 2001. D. Smith and G. Mann. Bootstrapping toponym classifiers. In Proceedings of the HLT-NAACL 2003 workshop on Analysis of geographic references - Volume 1, pages 45–49, 2003. B. Teitler, M. Lieberman, D. Panozzo, J. Sankaranarayanan, H. Samet, and J. Sperling. NewsStand: a new view on news. In Proceedings of the 16th ACM SIGSPATIAL international conference on Advances in geographic information systems, page 18. ACM, 2008. R. Volz, J. Kleb, and W. Mueller. Towards ontology-based disambiguation of geographical identifiers. In Proceedings of the 16th International Conference on World Wide Web, 2007. B. Wing and J. Baldridge. Simple supervised document geolocation with geodesic grids. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 955–964, 2011. Q. Zhang, P. Jin, S. Lin, and L. Yue. Extracting focused locations for web pages. In Web-Age Information Management, volume 7142, pages 76–89. 2012. W. Zong, D. Wu, A. Sun, E. Lim, and D. Goh. On assigning place names to geography related web pages. In Proceedings of the 5th ACM/IEEECS joint conference on Digital libraries, pages 354–362, 2005. 1476
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1477–1487, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Argument Inference from Relevant Event Mentions in Chinese Argument Extraction Peifeng Li, Qiaoming Zhu, Guodong Zhou* School of Computer Science & Technology Soochow University, Suzhou, 215006, China {pfli, qmzhu, gdzhou}@suda.edu.cn Abstract As a paratactic language, sentence-level argument extraction in Chinese suffers much from the frequent occurrence of ellipsis with regard to inter-sentence arguments. To resolve such problem, this paper proposes a novel global argument inference model to explore specific relationships, such as Coreference, Sequence and Parallel, among relevant event mentions to recover those intersentence arguments in the sentence, discourse and document layers which represent the cohesion of an event or a topic. Evaluation on the ACE 2005 Chinese corpus justifies the effectiveness of our global argument inference model over a state-of-the-art baseline. 1 Introduction The task of event extraction is to recognize event mentions of a predefined event type and their arguments (participants and attributes). Generally, it can be divided into two subtasks: trigger extraction, which aims to identify trigger/event mentions and determine their event type, and argument extraction, which aims to extract various arguments of a specific event and assign the roles to them. In this paper, we focus on argument extraction in Chinese event extraction. While most of previous studies in Chinese event extraction deal with Chinese trigger extraction (e.g., Chen and Ji, 2009a; Qin et al., 2010; Li et al., 2012a, 2012b), there are only a few on Chinese argument extraction (e.g., Tan et al., 2008; Chen and Ji, 2009b). Following previous studies, we divide argument extraction into two components, argument identification and role determination, where the former recognizes the arguments in a specific event mention and the latter classifies these arguments by roles. With regard to methodology, most of previous studies on argument extraction recast it as a Semantic Role Labeling (SRL) task and focus on intra-sentence information to identify the arguments and their roles. However, argument extraction is much different from SRL in the sense that, while the relationship between a predicate and its arguments in SRL can be mainly decided from the syntactic structure, the relationship between an event trigger and its arguments are more semantics-based, especially in Chinese, as a paratactic (e.g., discourse-driven and pro-drop) language with the wide spread of ellipsis and the open flexible sentence structure. Therefore, some arguments of a specific event mention are far away from the trigger and how to recover those inter-sentence arguments becomes a challenging issue in Chinese argument extraction. Consider the following discourse (from ACE 2005 Chinese corpus) as a sample: D1: 巴勒斯坦自治政府否认和加沙走廊20 号 清晨造成两名以色列人丧生(E1)的炸弹攻击 (E2)事件有关…表示将对这起攻击(E3)事件展 开调查。(The Palestinian National Authority denied any involvement in the bomb attack (E2) occurred in the Gaza Strip on the morning of the 20th, which killed (E1) two Israelites. … They claimed that they will be investigating this attack (E3).) - From CBS20001120.1000.0823 In above discourse, there are three event mentions, one kill (E1) and two Attack (E2, E3). 
While it is relatively easy to identify 20 号清晨 (morning of 20th), 加沙走廊 (Gaza Strip) and 炸 弹 (bomb) as the Time, Place and Instrument roles in E2 by a sentence-based argument 1477 extractor, it is really challenging to recognize these entities as the arguments of its corefered mention E3 since to reduce redundancy in a Chinese discourse, the later Chinese sentences omit many of these entities already mentioned in previous sentences. Similarly, it is hard to recognize 两名以色列人 (two Israelites) as the Target role for event mention E2 and identify 炸 弹 (bomb) as the Instrument role for event mention E1. An alternative way is to employ various relationships among relevant event mentions in a discourse to infer those intersentence arguments. The contributions of this paper are: 1) We propose a novel global argument inference model, in which various kinds of event relations are involved to infer more arguments on their semantic relations. 2) Different from Liao and Grishman (2010) and Hong et al. (2011), which only consider document-level consistency, we propose a more fine-gained consistency model to enforce the consistency in the sentence, discourse and document layers. 3) We incorporate argument semantics into our global argument inference model to unify the semantics of the event and its arguments. The rest of this paper is organized as follows. Section 2 overviews the related work. Section 3 describes a state-of-the-art Chinese argument extraction system as the baseline. Section 4 introduces our global model in inferring those inter-sentence arguments. Section 5 reports experimental results and gives deep analysis. Finally, we conclude our work in Section 6. 2 Related Work Almost all the existing studies on argument extraction concern English. While some apply pattern-based approaches (e.g., Riloff, 1996; Califf and Mooney, 2003; Patwardhan and Riloff, 2007; Chambers and Jurafsky, 2011), the others use machine learning-based approaches (e.g., Grishman et al., 2005; Ahn, 2006; Patwardhan and Riloff, 2009; Lu and Roth, 2012), most of which rely on various kinds of features in the context of a sentence. In comparison, there are only a few studies exploring inter-sentence information or argument semantics (e.g., Liao and Grishman, 2010; Hong et al., 2011; Huang and Riloff, 2011, 2012). Compared with the tremendous work on English event extraction, there are only a few studies (e.g., Tan et al., 2008; Chen and Ji, 2009b; Fu et al., 2010; Qin et al., 2010; Li et al., 2012) on Chinese event extraction with focus on either feature engineering or trigger expansion, under the same framework as English trigger identification. In additional, there are only very few of them focusing on Chinese argument extraction and almost all aim to feature engineering and are based on sentence-level information and recast this task as an SRL-style task. Tan et al. (2008) introduce multiple levels of patterns to improve the coverage in Chinese argument classification. Chen and Ji (2009b) apply various kinds of lexical, syntactic and semantic features to address the special issues in Chinese argument extraction. Fu et al. (2010) use a feature weighting scheme to re-weight various features for Chinese argument extraction. Li et al. (2012b) introduce more refined features to the system of Chen and Ji (2009b) as their baseline. Specially, several studies have successfully incorporated cross-document or document-level information and argument semantics into event extraction, most of them focused on English. Yangarber et al. 
(2007) apply a crossdocument inference mechanism to refine local extraction results for the disease name, location and start/end time. Mann (2007) proposes some constraints on relationship rescoring to impose the discourse consistency on the CEO’s personal information. Chambers and Jurafsky (2008) propose a narrative event chain which are partially ordered sets of event mentions centered around a common protagonist and this chain can represent the relationship among the relevant event mentions in a document. Ji and Grishman (2008) employ a rule-based approach to propagate consistent triggers and arguments across topic-related documents. Liao and Grishman (2010) mainly focus on employing the cross-event consistency information to improve sentence-level trigger extraction and they also propose an inference method to infer the arguments following role consistency in a document. Hong et al. (2011) employ the background information to divide an entity type into more cohesive subtypes to create the bridge between two entities and then infer arguments and their roles using cross-entity inference on the subtypes of entities. Huang and Rillof (2012) propose a sequentially structured sentence classifier which uses lexical associations and discourse relations across sentences to identify event-related document contexts and then apply it to recognize arguments and their roles on the relation among triggers and arguments. 1478 3 Baseline In the task of event extraction as defined in ACE evaluations, an event is defined as a specific occurrence involving participants (e.g., Person, Attacker, Agent, Defendant) and attributes (e.g., Place, Time). Commonly, an event mention is triggered via a word (trigger) in a phrase or sentence which clearly expresses the occurrence of a specific event. The arguments are the entity mentions involved in an event mention with a specific role, the relation of an argument to an event where it participates. Hence, extracting an event consists of four basic steps, identifying an event trigger, determining its event type, identifying involved arguments (participants and attributes) and determining their roles. As the baseline, we choose a state-of-the-art Chinese event extraction system, as described in Li et al. (2012b), which consists of four typical components: trigger identification, event type determination, argument identification and role determination. In their system, the former two components, trigger identification and event type determination, are processed in a joint model, where the latter two components are run in a pipeline way. Besides, the Maximum-Entropy (ME) model is employed to train individual component classifiers for above four components. This paper focuses on argument identification and role determination. In order to provide a stronger baseline, we introduce more refined features in such two components, besides those adopted in Li et al. (2012b). Following is a list of features adopted in our baseline. 
1) Basic features: trigger, POS (Part Of Speech) of the trigger, event type, head word of the entity, entity type, entity subtype; 2) Neighbouring features: left neighbouring word of the entity + its POS, right neighbour word of the entity + its POS, left neighbour word of the trigger + its POS, right neighbour word of the trigger + its POS; 3) Dependency features: dependency path from the entity to the trigger, depth of the dependency path; 4) Syntactic features: path from the trigger to the entity, difference of the depths of the trigger and entity, place of the entity (before trigger or after trigger), depth of the path from the trigger to the entity, siblings of the entity; 5) Semantic features: semantic role of the entity tagged by an SRL tool (e.g., ARG0, ARG1) (Li et al., 2010), sememe of trigger in Hownet (Dong and Dong, 2006). 4 Inferring Inter-Sentence Arguments on Relevant Event Mentions In this paper, a global argument inference model is proposed to infer those inter-sentence arguments and their roles, incorporating with semantic relations between relevant event mention pairs and argument semantics. 4.1 Motivation It’s well-known that Chinese is a paratactic language, with an open flexible sentence structure and often omits the subject or the object, while English is a hypotactic language with a strict sentence structure and emphasizes on cohesion between clauses. Hence, there are two issues in Chinese argument extraction, associated with its nature of the paratactic language. The first is that many arguments of an event mention are out of the event mention scope since ellipsis is a common phenomenon in Chinese. We call them inter-sentence arguments in this paper. Table 1 gives the statistics of intrasentence and inter-sentence arguments in the ACE 2005 Chinese corpus and it shows that 20.8% of the arguments are inter-sentence ones while this figure is less than 1% of the ACE 2005 English corpus. The main reason of that difference is that some Chinese arguments are omitted in the same sentence of the trigger since Chinese is a paratactic language with the wide spread of ellipsis. Besides, a Chinese sentence does not always end with a full stop. In particular, a comma is used frequently as the stop sign of a sentence in Chinese. We detect sentence boundaries, relying on both full stop and comma signs, since in a Chinese document, comma can be also used to sign the end of a sentence. In particular, we detect sentence boundaries on full stop, exclamatory mark and question mark firstly. Then, we identify the sentence boundaries on comma, using a binary classifier with a set of lexical and constituent-based syntactic features, similar to Xue and Yang (2010). Category Number #Arguments 8032 #Inter-sentence 1673(20.8%) #Intra-sentence 6359(79.2%) Table 1. Statistics: Chinese argument extraction with regard to intra- sentence and inter-sentence arguments. The second issue is that the Chinese word order in a sentence is rather agile for the open 1479 flexible sentence structure. Hence, different word orders can often express the same semantics. For example, a Die event mention “Three person died in this accident.” can be expressed in many different orders in Chinese, such as “在事故中三 人死亡。”, “事故中死亡三人。”, “三人在事故 中死亡。”, etc. In a word, above two issues indicate that syntactic feature-based approaches are limited in identifying Chinese arguments and it will lead to low recall in argument identification. 
Therefore, employing those high level information to capture the semantic relation, not only the syntactic structure, between the trigger and its long distance arguments is the key to improve the performance of the Chinese argument identification. Unfortunately, it is really hard to find their direct relations since they always appear in different clauses or sentences. An alternative way is to link the different event mentions with their predicates (triggers) and use the trigger as a bridge to connect the arguments to the trigger in another event mention indirectly. Hence, the semantic relations among event mentions are helpful to be a bridge to identify those inter-sentence arguments. 4.2 Relations of Event Mention Pairs In a discourse, most event mentions are surrounding a specific topic. It’s obvious that those mentions have the intrinsic relationships to reveal the essential structure of a discourse. Those relevant semantics-based relations are helpful to infer the arguments for a specific trigger mention when the syntactic relations in Chinese argument extraction are not as effective as that in English. In this paper, we divide the relations among relevant event mentions into three categories: Coreference, Sequence and Parallel. An event may have more than one mention in a document and coreference event mentions refer to the same event, as same as the definition in the ACE evaluations. Those coreference event mentions always have the same arguments and roles. Therefore, employing this relation can infer the arguments of an event mention from their Coreference ones. For example, we can recover the Time, Place and Instrument for E3 via its Coreference mention E2 in discourse D1, mentioned in Section 1. Li et al. (2012a) find out that sometimes two trigger mentions are within a Chinese word whose morphological structure is Coordination. Take the following sentence as a sample: D2: 一名17 岁的少年劫持一辆巴士,刺(E4) 死(E5) 一名妇女。(A 12-year-old younger hijacked a bus and then stabbed (E4) a woman to death (E5).) - From ZBN20001218.0400.0005 In D2, 刺死 (stab a person to death) is a trigger with the Coordination structure and can be divided into two single-morpheme words 刺 (stab) and 死 (die) while the former triggers an Attack event and the latter refers to a Die one. It’s interesting that they share all arguments in this sentence. The relation between those event mentions whose triggers merge a Chinese word or share the subject and the object are Parallel. For the errors in the syntactic parsing, the second single-morpheme trigger is often assigned a wrong tag (e.g., NN, JJ) and this leads to the errors in the argument extraction. Therefore, inferring the arguments of the second singlemorpheme trigger from that of the first one based on Parallel relation is also an available way to recover arguments. Like that the topic is an axis in a discourse, the relations among those relevant event mentions with the different types is the bone to link them into a narration. There are a few studies on using the event relations in NLP (e.g., summarization (Li et al., 2006), learning narrative event chains (Chambers and Jurafsky, 2007)) to ensure its effectiveness. In this paper, we define two types of Sequence relations of relevant event mentions: Cause and Temporal for their high probabilities of sharing arguments. The Cause relation between the event mentions are similar to that in the Penn Discourse TreeBank 2.0 (Prasad et al., 2008). For example, an Attack event often is the cause of an Die or Injure event. 
Our Temporal relation is limited to those mentions with the same or related event types (e.g., Transport and Arrest), since such pairs have a high probability of sharing arguments. Take the following discourse as a sample:

D3: 这批战俘离开(E6)阿尔及利亚西部城市廷杜夫前往(E7)摩洛哥西南部城市阿加迪尔。 (These prisoners left (E6) Tindouf, a western city of Algeria, and went (E7) to Agadir, a southwestern city of Morocco.) - From Xin20001215.2000.0158

In D3, there are two Transport mentions, and it is natural to infer 阿加迪尔 (Agadir) as the Destination role of E6 and 廷杜夫 (Tindouf) as the Origin role of E7 via their Sequence relation.
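Section 4.3 below identifies such relations using statistics learned from the training set, in particular how often event-mention pairs with given event types and trigger head morphemes share arguments. As a rough illustration only (our own sketch with hypothetical data structures, not the authors' implementation), those sharing statistics could be tabulated as follows:

```python
from collections import defaultdict

def tabulate_argument_sharing(training_pairs, head_morpheme):
    """training_pairs: co-occurring event-mention pairs from the training set,
    each mention assumed to carry .event_type, .trigger and .argument_ids
    (hypothetical attributes). Returns, for each pair of (event type, head
    morpheme) keys, the fraction of mention pairs sharing at least one argument."""
    shared, total = defaultdict(int), defaultdict(int)
    for m1, m2 in training_pairs:
        key = ((m1.event_type, head_morpheme(m1.trigger)),
               (m2.event_type, head_morpheme(m2.trigger)))
        total[key] += 1
        if set(m1.argument_ids) & set(m2.argument_ids):
            shared[key] += 1
    return {key: shared[key] / total[key] for key in total}
```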
Algorithm 1 illustrates a knowledge-based approach to identify the Sequence event relation in a discourse for any two trigger mentions tri1 and tri2 as follows: Algorithm 1 1: input: tri1 and tri2 and their type et1 and et2 2: output: whether their relation is Sequence 3: begin 4: hm1 ←HM(tri1); hm2 ←HM(tri2) 5: MP ←FindAllMP(hm1,et1,hm2,et2) 6: for any mpi in MP 7: if ShareArg(mpi) is true then 8: return true // Sequence 9: end if 10: end for 11: return false 12: end In algorithm 1, HM(tri) is to identify the head morpheme in trigger tri and FindAllMP(hm1, et1, hm2, et2) is to find all event mention pairs in the training set which satisfy the condition that their head morphemes are hm1 and hm2, and their event types are et1 and et2 respectively. Besides, ShareArg(mpi)is used to identify whether the event mention pair mpi sharing at least one argument. In this algorithm, since the relations on the event types are too coarse, we introduce a more fine-gained Sequence relation both on the event types and the head morphemes of the triggers which can divide an event type into many subtypes on the head morpheme. Li and Zhou (2012) have ensured the effectiveness of using head morpheme to infer the triggers and our experiment results also show it is helpful for identifying relevant event mentions which aims to the higher accuracy. 4.4 Global Argument Inference Model Our global argument inference model is composed of two steps: 1) training two sentencebased classifiers: argument identifier (AI) and role determiner (RD) that estimate the score of a candidate acts as an argument and belongs to a 3 The threshold is tuned to 0.78 on the training set. 1481 specific role following Section 3. 2) Using the scores of two classifiers and the event relations in a sentence, a discourse or a document, we perform global optimization to infer those missing or long distance arguments and their roles. To incorporate those event relations with our global argument inference model, we regard a document as a tree and divide it into three layers: document, discourse and sentence. A document is composed of a set of the discourses while a discourse contains three sentences. Since almost all arguments (~98%) of a specific event mention in the ACE 2005 Chinese corpus appear in the sentence containing the specific event mention and its two adjacent sentences (previous and next sentences), we only consider these three sentences as a discourse to simplify the process of identifying the scope of a discourse. We incorporate different event relations into our model on the different layer and the goal of our global argument inference model is to achieve the maximized scores over a document on its three layers and two classifiers: AI and RD. The score of document D is defined as )) 1 ))( , ( 1( ) , ( ( ) 1( )) 1 ))( ( 1( ) ( ( ( max arg , , , , , , , , , , , , , , , ^ > < > < ∈ ∈ > < > < ∈ > < > < ∈ ∈ ∈ ∈ > < > < ∈ > < > < ∈ − − + + − + − − + = ∑ ∑ ∑ ∑ ∑ ∑ ∑ ∑ ∑ m Z m Z D m Z m Z D D iI iI j i S j i S k j i T k j i T Z A R m Z Z I Z Z I D iI iI j i S j i S k j i T k j i T Z A Y X Y R E f Y R E f X E f X E f D α α (1) } 1,0 { .. 
∈ Z X t s (2) } 1,0 { , ∈ > < m Z Y (3) R m Y X m Z Z ∈ ∀ ≥ > < , (4) ∑ ∈R m m Z Z Y X > < = , (5) where Ii is the ith discourses in document D; S<i,j> is the jth sentences in discourse Ii; T<i,j,k> is the kth event mentions in sentence S<i,j>; A<i,j,k,l> is the lth candidate arguments in event mention T<i,j,k>; Z is used to denote <i,j,k,l>; fI(EZ) is the score of AI identifying entity mention EZ as an argument, where EZ is the lth entity of the kth event mention of the jth sentence of the ith discourse in document D. fD(EZ, Rm) is the score of RD assigning role Rm to argument EZ. Finally, XZ and Y<Z,m> are the indicators denoting whether entity EZ is an argument and whether the role Rm is assigned to entity EZ respectively. Besides, Eq. 4 and Eq. 5 are the inferences to enforce that: 1) if an entity belongs to a role, it must be an argument; 2) if a entity is an argument of a specific event mention, it must have a role. Parallel relation: Sentence-based optimization is used to incorporate the Parallel relation of two event mentions into our model and they share all arguments in a sentence. Since different event type may have different role set, each role in a specific event should be mapped to the corresponding role in its Parallel event when they have the different event type. For example, the argument “一名17 岁的少年” (A 12-yearold younger) in D2 acts as the Attacker role in the Attack event and the Agent role in the Die event. We learn those role-pairs from the training set and Table 2 shows part of the role relations learning from the training set. Event type pair Role pair Attack-Die Attacker-Agent; TargetVictim;… Injure-Die Agent-Agent; VictimVictim;… TransportDemonstrate Artifact-Entity; Destination-Place;… Table 2. Part of role-pairs for those event mention pairs with Parallel relation. To infer the arguments and their roles on the Parallel relation, we enforce the consistency on the role-pair as follows: > < > < × > < > < > < > < > < > < > < > < > < > < = ∧ >∈ < ∧ ∈ ∧ ∈ ∧ ∈ ∧ ∈ ∧ ∈ ∀ = ',' , , , , , ' ' , , ',' , , , , , , , , ' ,' , , , , ' ,',' , , ,, , , ' , , l k j i l k j i h et h et k j i l k j i k j i l k j i j i k j i k j i i j i i m l k j i m l k j i E E RP m m T A T A S T T I S D I Y Y (6) where 'h h et et RP × is the set of role-pairs between two Parallel event mention eth and eth’ and > < > < = ',' , , , , , l k j i l k j i E E means they refer to the same entity mention. With the transitivity between the indicators X and Y, Eq. 6 also enforces the consistency on X<i,j,k,l> and X<i,j,k’,l’>. Coreference relation: Since the NC and EC relcation between two event mentions are different in the event expression, we introduce the discourse-based optimization for the former and document-based optimization for the latter. For two NC mentions, we ensure that the succeeding mentions can inherit the arguments form the previous one. To enforce this consistency, we just replace all fI(EZ) and fD(EZ, Rm) of the succeeding event mention with that of the previous one, since the previous one have the more context information. As for two EC event mentions, algorithm 2 shows how to create the constraints for our 1482 global argument inference model to infer arguments and roles. 
Algorithm 2 1: input: two event mentions T, T’ and their arguments set A and A’ 2: output: the constraints set C 3: begin 4: for each argument a in A do 5: a’←FindSim(a) 6: if a’≠∅ then 7: ) , ( 'a a Y Y y Consistenc C C ∪ ← 8: end if 9: end for 10: end In algorithm 2, the function FindSim(a) is used to find a similar candidate argument a’ in A’ for a. If it’s found, we enforce the consistency of argument a and a’ in the role by using Consistency(Ya,Ya’) where Ya and Ya’ are the indicators in Eq. 1. To evaluate the similarity between two candidates a and a’, we regard them as similar ones when they are the same word or in the same entity coreference chain. We use a coreference resolution tool to construct the entity coreference chains, as described in Kong et al (2010). Sequence relation: For any two event mentions in a discourse, we use the event type pair with their head morphemes (e.g., Attack:炸 (burst) - Die:死(die), Trial-Hearing:审(trial) - Sentence:判(sentence)) to search the training set and then obtain the probabilities of sharing the arguments as mentioned in algorithm 1. We denoted Pro<et,et’,HM(tri),HM(tri’),Rm,Rm’> as the probability of the trigger mentions tri and tri’ (their event types are et and et’ respectively.) sharing an argument whose roles are Rm and Rm’ respectively. We propose following discoursebased constraint to enforce the consistency between the roles of two arguments, which are related semantically, temporally, causally or conditionally, based on the probability of sharing an argument and the absolute value of the difference between the scores of RD: λ δ > > = ∈ ∈ ∈ ∧ ∈ = > < > < > < > < > < > < > < > < > < > < > < > < ) , ( ) , ( ) , ),' ( ), ( ,' , ( Pr ' , , ∀ ' ',' ,' , , , , ' ',' ,' , , , , ' , ' ,' , , , , ' , , ' ,',' ,' , ,, , , m l k j i D m l k j i D m m l k j i l k j i j i k j i j i k j i i j i j i i m l k j i m l k j i R E f R E f R R tri HM tri HM et et o E E R m m S T S T I S S D I Y Y ∧ ∧ ∧ ∈ ∧ ∧ ∧ (7) where δ and λ are the thresholds learned from the development set; tri and tri’ are triggers of kth and k’th event mention whose event types are et and et’ in S<i,j> and S<i,j’> respectively. 4.5 Incorporating Argument Semantics into Global Argument Inference Model We also introduce the argument semantics, which represent the semantic relations of argument-argument pair, argument-role pair and argument-trigger pair, to reflect the cohesion inside an event. Hong et al. (2011) found out that there is a strong argument and role consistency in the ACE 2005 English corpus. Those consistencies also occur in Chinese and they reveal the relation between the trigger and its arguments, and also explore the relation between the argument and its role. Besides, those entities act as non-argument also have the consistency with high probabilities. To let the global argument inference model combine those knowledges of argument semantics, we compute the prior probabilities P(X<i,j>=1) and P(Y<i,j,m>=1) that entity enj occurrs in a specific event type eti as an argument and its role is Rm respectively. To overcome the sparsity of the entities, we cluster those entities into more cohesive subtype following Hong et al. (2011). Hence, following the independence assumptions described by Berant et al. (2011), we modify the fI(EZ) and fD(EZ,Rm)in Eq. 
1 as follows: ) 0 ( ) |1 ( 1( )1 ( ) |1 ( log ) ( = = − = = = Z Z Z Z Z Z Z I X P F X P X P F X P E f (8) ) 0 ( ) | 1 ( 1( )1 ( ) | 1 ( log ) , ( , , , , , , = = − = = = > < > < > < > < > < > < m Z m Z m Z m Z m Z m Z m Z D X P F X P X P F Y P R E f (9) where ) | 1 ( Z Z F X P = and ) |1 ( , , > < > < = m Z m Z F Y P are the probabilities from the AI and AD respectively while FZ and F<Z,m> are the feature vectors. Besides, )1 ( , = > < m Z X P and )1 ( = Z X P are the prior probabilities learning from the training set. 5 Experimentation In this section, we first describe the experimental settings and the baseline, and then evaluate our global argument inference model incorporating with relevant event mentions and argument semantics to infer arguments and their roles. 5.1 Experimental Settings and Baseline For fair comparison, we adopt the same experimental settings as the state-of-the-art event extraction system (Li et al. 2012b) and all the 1483 evaluations are experimented on the ACE 2005 Chinese corpus. We randomly select 567 documents as the training set and the remaining 66 documents as the test set. Besides, we reserve 33 documents in the training set as the development set and use the ground truth entities, times and values for our training and testing. As for evaluation, we also follow the standards as defined in Li et al. (2012b). Finally, all the sentences in the corpus are divided into words using a Chinese word segmentation tool (ICTCLAS) 1 with all entities annotated in the corpus kept. We use Berkeley Parser 2 and Stanford Parser 3 to create the constituent and dependency parse trees. Besides, the ME tool (Maxent) 4 is employed to train individual component classifiers and lp_solver5 is used to construct our global argument inference model. Besides, all the experiments on argument extraction are done on the output of the trigger extraction system as described in Li et al. (2012b). Table 3 shows the performance of the baseline trigger extraction system and Line 1 in Table 4 illustrates the results of argument identification and role determination based on this system. Trigger identification Event type determination P(%) R(%) F1 P(%) R(%) F1 74.4 71.9 73.1 71.4 68.9 70.2 Table 3. Performance of the baseline on trigger identification and event type determination. 5.2 Inferring Arguments on Relevant Event Mentions and Argument Semantics We develop a baseline system as mentioned in Section 3 and Line 2 in Table 4 shows that it slightly improves the F1-measure by 0.9% over Li et al. (2012b) due to the incorporation of more refined features. This result indicates the limitation of syntactic-based feature engineering. Before evaluating our global argument inference model, we should identify the event relations between two mentions in a sentence, a discourse or a document. The experimental results show that the accuracies of identifying NC, EC, Parallel and Sequence relation are 80.0%, 72.4%, 88.5% and 87.7% respectively. Those results ensure that our simple methods are 1http://ictclas.org/ 2 http://code.google.com/p/berkeleyparser/ 3 http://nlp.stanford.edu/software/lex-parser.shtml 4 http://mallet.cs.umass.edu/ 5 http://lpsolve.sourceforge.net/5.5/ effective. Our statistics on the development set shows almost 65% of the event mentions are involved in those Correfrence, Parallel and Sequence relations, which occupy 63%, 50%, 9% respectively6. Most of the exceptions are isolated event mentions. 
System Argument identification Argument role determination P(%) R(%) F1 P(%) R(%) F1 Li et al.(2012b) 59.1 57.2 58.1 55.8 52.1 53.9 Baseline 60.5 57.6 59.0 55.7 53.0 54.4 BIM 59.3 60.1 59.7 54.4 55.2 54.8 BIM+RE 60.2 65.6 62.8 55.0 60.0 57.4 BIM+RE+AS 62.9 66.1 64.4 57.2 60.2 58.7 Table 4. Performance comparison of argument extraction on argument identification and role determination. Once the classifier AI and RD are trained, we would like to apply our global argument inference model to infer more inter-sentence arguments and roles. To achieve an optimal solution, we formulate the global inference problem as an Integer Linear Program (ILP), which leads to maximize the objective function. ILP is a mathematical method for constraintbased inference to find the optimal values for a set of variables that maximize an objective function in satisfying a certain number of constraints. In the literature, ILP has been widely used in many NLP applications (e.g., Barzilay and Lapata, 2006; Do et al., 2012; Li et al., 2012b). For our systems, we firstly evaluate the performance of our basic global argument inference model (BIM) with the Eq. 2–5 which enforce the consistency on AI and RD and then introduce the inference on the relevant event mentions (RE) and argument semantics (AS) to BIM. Table 4 shows their results and we can find out that: 1) BIM only slightly improves the performance in F1-measure, as the result of more increase in recall (R) than decrease in precision (P). This suggests that those constraints just enforcing the consistency on AI and RD is not effective enough to infer more arguments. 2) Compared to the BIM, our model BIM+RE enhances the performance of argument identification and role determination by 3.1% and 2.6% improvement in F1-measure respectively. This suggests the effectiveness 6 20% of the mentions belongs to both Coreference and Sequence relations. 1484 of our global argument inference model on the relevant event mentions to infer intersentence arguments. Table 5 shows the contributions of the different event relations while the Sequence relation gains the highest improvement of argument identification and role determination in F1-measure respectively. Constraint Argument identification Argument role determination P(%) R(%) F1 P(%) R(%) F1 BIM 59.3 60.1 59.7 54.4 55.2 54.8 +Parallel +0.6 +0.7 +0.6 +0.4 +0.6 +0.5 +NC +0.0 +0.8 +0.4 -0.2 +0.6 +0.2 +EC +0.6 +1.2 +0.9 +0.5 +1.0 +0.7 + Sequence -0.3 +2.8 +1.2 -0.2 +2.6 +1.1 Table 5. Contributions of different event relations on argument identification and role determination. (Incremental) 3) Our model BIM+ER+AS gains 1.6% improvement for argument identification, and 1.3% for role determination. The results ensure that argument semantics not only can improve the performance of argument identification, but also is helpful to assign a correct role to an argument in role determination. Table 3 shows 25.6% of trigger mentions introduced into argument extraction are pseudo ones. If we use the golden trigger extraction, our exploration shows that the precision and recall of argument identification can be up to 78.6% and 88.3% respectively. Table 6 shows the performance comparison of argument extraction on AI and RD given golden trigger extraction. Compared to the Baseline, our system improves the performance of argument identification and role determination by 6.4% and 5.8% improvement in F1-measure respectively, largely due to the dramatic increase in recall of 10.9% and 10.4%. 
                 Argument identification      Argument role determination
System           P(%)    R(%)    F1           P(%)    R(%)    F1
Baseline         76.2    77.4    76.8         70.4    72.0    71.2
Model2           78.6    88.3    83.2         72.3    82.4    77.0
Table 6. Performance comparison of argument identification and role determination. (Gold trigger extraction)

5.3 Discussion

The motivation of our paper is that syntactic features play an important role in current machine learning-based approaches to English event extraction, whereas their effectiveness is much reduced in Chinese. Consequently, the improvement our model brings to English event extraction is much smaller than for Chinese. However, our model can be an effective complement to sentence-level English argument extraction systems, since the performance of argument extraction is still low in English and using discourse-level information is one way to improve it, especially for those event mentions whose arguments are spread across complex sentences. Moreover, our exploration shows that our global argument inference model can mine long-distance arguments that are not annotated as arguments of a specific event mention in the corpus, since the annotators only tagged arguments within a narrow scope or omitted a few arguments. To our knowledge, these are actually true arguments, and they account for more than 30.6% of the pseudo arguments inferred by our model. This confirms that our global argument inference model and the relations among event mentions are helpful to argument extraction.

6 Conclusion

In this paper we propose a global argument inference model to extract inter-sentence arguments, motivated by the nature of Chinese as a discourse-driven, pro-drop language with widespread ellipsis and an open, flexible sentence structure. In particular, we incorporate various kinds of event relations and the argument semantics into the model at the sentence, discourse and document layers, which represent the cohesion of an event or a topic. The experimental results show that our global argument inference model outperforms the state-of-the-art system. In future work, we will focus on introducing more semantic information and cross-document information into the global argument inference model to improve the performance of argument extraction.

Acknowledgments

The authors would like to thank the three anonymous reviewers for their comments on this paper. This research was supported by the National Natural Science Foundation of China under Grant No. 61070123, No. 61272260 and No. 61273320, and the National 863 Project of China under Grant No. 2012AA011102. The co-author tagged with “*” is the corresponding author.

References David Ahn. 2006. The Stages of Event Extraction. In Proc. COLING/ACL 2006 Workshop on Annotating and Reasoning about Time and Events, pages 1-8, Sydney, Australia. Regina Barzilay and Mirella Lapata. 2006. Aggregation via Set Partitioning for Natural Language Generation. In Proc. NAACL 2006, pages 359-366, New York City, NY. Jonathan Berant, Ido Dagan and Jacob Goldberger. 2011. Global Learning of Typed Entailment Rules. In Proc. ACL 2011, pages 610-619, Portland, OR. Mary Elaine Califf and Raymond J. Mooney. 2003. Bottom-up Relational Learning of Pattern Matching Rules for Information Extraction. Journal of Machine Learning Research, 4:177-210. Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised Learning of Narrative Event Chains. In Proc. ACL 2008, pages 789-797, Columbus, OH. Nathanael Chambers and Dan Jurafsky. 2011. Template-based Information Extraction without the Templates. In Proc.
ACL 2011, pages 976-986, Portland, OR. Zheng Chen and Heng Ji. 2009a. Can One Language Bootstrap the Other: A Case Study on Event Extraction. In Proc. NAACL/HLT 2009 Workshop on Semi-supervised Learning for Natural Language Processing, pages 66-74, Boulder, Colorado. Zheng Chen and Heng Ji. 2009b. Language Specific Issue and Feature Exploration in Chinese Event Extraction. In Proc. NAACL HLT 2009, pages 209-212, Boulder, Colorado. Zhengdong Dong and Qiang Dong. 2006. HowNet and the Computation of Meaning. World Scientific Pub Co. Inc. Quang Xuan Do, Wei Lu and Dan Roth. 2012. Joint Inference for Event Timeline Construction. In Proc. EMNLP 2012, pages 677-687, Jeju, Korea. Jianfeng Fu, Zongtian Liu, Zhaoman Zhong and Jianfang Shan. 2010. Chinese Event Extraction Based on Feature Weighting. Information Technology Journal, 9: 184-187. Ralph Grishman, David Westbrook and Adam Meyers. 2005. NYU’s English ACE 2005 System Description. In Proc. ACE 2005 Evaluation Workshop, Gaithersburg, MD. Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou and Qiaoming Zhu. 2011. Using Cross-Entity Inference to Improve Event Extraction. In Proc. ACL 2011, pages 1127-1136, Portland, OR. Ruihong Huang and Ellen Riloff. 2011. Peeling Back the Layers: Detecting Event Role Fillers in Secondary Contexts, In Proc. ACL 2011, pages 1137-1147, Portland, OR. Ruihong Huang and Ellen Riloff. 2012. Modeling Textual Cohesion for Event Extraction. In Proc. AAAI 2012, pages 1664-1770, Toronto, Canada. Heng Ji and Ralph Grishman. 2008. Refining Event Extraction through Cross-Document Inference. In Proc. ACL 2008, pages 254-262, Columbus, OH. Fang Kong, Guodong Zhou, Longhua Qian and Qiaoming Zhu. 2010. Dependency-driven Anaphoricity Determination for Coreference Resolution. In Proc. COLING 2010, pages 599-607, Beijing, China. Junhui Li, Guodong Zhou and Hwee Tou Ng. 2010. Joint Syntactic and Semantic Parsing of Chinese. In Proc. ACL 2010, pages 1108-1117, Uppsala, Sweden. Peifeng Li, Guodong Zhou, Qiaoming Zhu and Libin Hou. 2012a. Employing Compositional Semantics and Discourse Consistency in Chinese Event Extraction. In Proc. EMNLP 2012, pages 10061016, Jeju, Korea. Peifeng Li, Qiaoming Zhu, Hongjun Diao and Guodong Zhou. 2012b. Joint Modeling of Trigger Identification and Event Type Determination in Chinese Event Extraction. In Proc. COLING 2012, pages 1635-1652, Mumbai, India. Peifeng Li and Guodong Zhou. 2012. Employing Morphological Structures and Sememes for Chinese Event Extraction. In Proc. COLING 2012, pages 1619-1634, Mumbai, India. Wenjie Li, Mingliu Wu, Qin Lu, Wei Xu and Chunfa Yuan. 2006. Extractive Summarization using Inter- and Intra- Event Relevance. In Proc. COLING/ACL 2006, pages 369-376, Sydney, Australia. Shasha Liao and Ralph Grishman. 2010. Using Document Level Cross-Event Inference to Improve Event Extraction. In Proc. ACL 2010, pages 789797, Uppsala, Sweden. Wei Lu and Dan Roth. 2012. Automatic Event Extraction with Structured Preference Modeling. In Proc. ACL 2012, pages 835-844, Jeju, Korea. Gideon Mann. 2007. Multi-document Relationship Fusion via Constraints on Probabilistic Databases. In Proc. HLT/NAACL 2007, pages 332-229, Rochester, NY. Siddharth Patwardhan and Ellen Riloff. 2007. Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions. In Proc. EMNLP/CoNLL 2007, pages 717-727, Prague, Czech Republic. Siddharth Patwardhan and Ellen Riloff. 2009. A Unified Model of Phrasal and Sentential Evidence 1486 for Information Extraction. In Proc. 
EMNLP 2009, pages 151-160, Singapore. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In Proc. LREC 2008, pages 29612968, Marrakech, Morocco. Bing Qin, Yanyan Zhao, Xiao Ding, Ting Liu and Guofu Zhai. 2010. Event Type Recognition Based on Trigger Expansion. Tsinghua Science and Technology, 15(3): 251-258, Beijing, China. Ellen Riloff. 1996. Automatically Generating Extraction Patterns from Untagged Text. In Proc. AAAI 1996, pages 1044–1049, Portland, OR. Hongye Tan, Tiejun Zhao, Jiaheng Zheng. 2008. Identification of Chinese Event and Their Argument Roles. In Proc. 2008 IEEE International Conference on Computer and Information Technology Workshops, pages 14-19, Sydney, Australia. Nianwen Xue and Yaqin Yang. 2010. Chinese Sentence Segmentation as Comma Classification. In Proc. ACL 2010, pages 631-635, Uppsala, Sweden. Roman Yangarber, Clive Best, Peter von Etter, Flavio Fuart, David Horby and Ralf Steinberger. 2007. Combining Information about Epidemic Threats from Multiple Sources. In Proc. RANLP 2007 Workshop on Multi-source, Multilingual Information Extraction and Summarization, pages 41-48, Borovets, Bulgaria. 1487
2013
145
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1488–1497, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Fine-grained Semantic Typing of Emerging Entities Ndapandula Nakashole, Tomasz Tylenda, Gerhard Weikum Max Planck Institute for Informatics Saarbr¨ucken, Germany {nnakasho,ttylenda,weikum}@mpi-inf.mpg.de Abstract Methods for information extraction (IE) and knowledge base (KB) construction have been intensively studied. However, a largely under-explored case is tapping into highly dynamic sources like news streams and social media, where new entities are continuously emerging. In this paper, we present a method for discovering and semantically typing newly emerging out-ofKB entities, thus improving the freshness and recall of ontology-based IE and improving the precision and semantic rigor of open IE. Our method is based on a probabilistic model that feeds weights into integer linear programs that leverage type signatures of relational phrases and type correlation or disjointness constraints. Our experimental evaluation, based on crowdsourced user studies, show our method performing significantly better than prior work. 1 Introduction A large number of knowledge base (KB) construction projects have recently emerged. Prominent examples include Freebase (Bollacker 2008) which powers the Google Knowledge Graph, ConceptNet (Havasi 2007), YAGO (Suchanek 2007), and others. These KBs contain many millions of entities, organized in hundreds to hundred thousands of semantic classes, and hundred millions of relational facts between entities. However, despite these impressive advances, there are still major limitations regarding coverage and freshness. Most KB projects focus on entities that appear in Wikipedia (or other reference collections such as IMDB), and very few have tried to gather entities “in the long tail” beyond prominent sources. Virtually all projects miss out on newly emerging entities that appear only in the latest news or social media. For example, the Greenlandic singer Nive Nielsen has gained attention only recently and is not included in any KB (a former Wikipedia article was removed because it “does not indicate the importance or significance of the subject”), and the resignation of BBC director Entwistle is a recently new entity (of type event). Goal. Our goal in this paper is to discover emerging entities of this kind on the fly as they become noteworthy in news and social-media streams. A similar theme is pursued in research on open information extraction (open IE) (Banko 2007; Fader 2011; Talukdar 2010; Venetis 2011; Wu 2012), which yields higher recall compared to ontologystyle KB construction with canonicalized and semantically typed entities organized in prespecified classes. However, state-of-the-art open IE methods extract all noun phrases that are likely to denote entities. These phrases are not canonicalized, so the same entity may appear under many different names, e.g., “Mr. Entwistle”, “George Entwistle”, “the BBC director”, “BBC head Entwistle”, and so on. This is a problem because names and titles are ambiguous, and this hampers precise search and concise results. Our aim is for all recognized and newly discovered entities to be semantically interpretable by having fine-grained types that connect them to KB classes. 
The expectation is that this will boost the disambiguation of known entity names and the grouping of new entities, and will also strengthen the extraction of relational facts about entities. For informative knowledge, new entities must be typed in a fine-grained manner (e.g., guitar player, blues band, concert, as opposed to crude types like person, organization, event). Strictly speaking, the new entities that we cap1488 ture are typed noun phrases. We do not attempt any cross-document co-reference resolution, as this would hardly work with the long-tail nature and sparse observations of emerging entities. Therefore, our setting resembles the established task of fine-grained typing for noun phrases (Fleischmann 2002), with the difference being that we disregard common nouns and phrases for prominent in-KB entities and instead exclusively focus on the difficult case of phrases that likely denote new entities. The baselines to which we compare our method are state-of-the-art methods for nounphrase typing (Lin 2012; Yosef 2012). Contribution. The solution presented in this paper, called PEARL, leverages a repository of relational patterns that are organized in a typesignature taxonomy. More specifically, we harness the PATTY collection consisting of more than 300,000 typed paraphrases (Nakashole 2012). An example of PATTY’s expressive phrases is: ⟨musician⟩* cover * ⟨song⟩for a musician performing someone else’s song. When extracting noun phrases, PEARL also collects the cooccurring PATTY phrases. The type signatures of the relational phrases are cues for the type of the entity denoted by the noun phrase. For example, an entity named Snoop Dogg that frequently cooccurs with the ⟨singer⟩* distinctive voice in * ⟨song⟩pattern is likely to be a singer. Moreover, if one entity in a relational triple is in the KB and can be properly disambiguated (e.g., a singer), we can use a partially bound pattern to infer the type of the other entity (e.g., a song) with higher confidence. In this line of reasoning, we also leverage the common situation that many input sentences contain one entity registered in the KB and one novel or unknown entity. Known entities are recognized and mapped to the KB using a recent tool for named entity disambiguation (Hoffart 2011). For cleaning out false hypotheses among the type candidates for a new entity, we devised probabilistic models and an integer linear program that considers incompatibilities and correlations among entity types. In summary, our contribution in this paper is a model for discovering and ontologically typing out-of-KB entities, using a fine-grained type system and harnessing relational paraphrases with type signatures for probabilistic weight computation. Crowdsourced quality assessments demonstrate the accuracy of our model. 2 Detection of New Entities To detect noun phrases that potentially refer to entities, we apply a part-of-speech tagger to the input text. For a given noun phrase, there are four possibilities: a) The noun phrase refers to a general concept (a class or abstract concept), not an individual entity. b) The noun phrase is a known entity that can be directly mapped to the knowledge base. c) The noun phrase is a new name for a known entity. d) The noun phrase is a new entity not known to the knowledge base at all. In this paper, our focus is on case d); all other cases are out of the scope of this paper. 
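A minimal sketch of this filtering step is shown below, under the simplifying assumptions that a dictionary of known entity surface forms is available (the dictionary actually used is described next) and that all raw spellings of the phrase in the stream can be inspected; the function name and the crude plural test are illustrative, not the exact implementation.

```python
# Sketch only: decide whether a noun phrase is a candidate emerging entity (case d).
def is_candidate_new_entity(phrase, headword, occurrences, known_surface_forms):
    """phrase: the noun phrase; headword: its syntactic head;
    occurrences: raw spellings observed in the stream;
    known_surface_forms: lower-cased surface strings of in-KB entities."""
    # Cases b) and c): the phrase maps to an entity already in the knowledge base.
    if phrase.lower() in known_surface_forms:
        return False
    # Case a): treat plural headwords as general concepts, not individual entities
    # (a crude morphological test, for illustration only).
    if headword.lower().endswith("s") and not headword.lower().endswith("ss"):
        return False
    # Case a) continued: require consistent capitalization across all occurrences.
    if not all(occ[:1].isupper() for occ in occurrences):
        return False
    # Case d): unknown, singular, consistently capitalized -> emerging entity.
    return True

known = {"george entwistle", "bbc", "greenland"}
print(is_candidate_new_entity("Nive Nielsen", "Nielsen",
                              ["Nive Nielsen", "Nive Nielsen"], known))  # True
print(is_candidate_new_entity("press announcement", "announcement",
                              ["press announcement"], known))           # False
```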
We use an extensive dictionary of surface forms for in-KB entities (Hoffart 2012), to determine if a name or phrase refers to a known entity. If a phrase does not have any match in the dictionary, we assume that it refers to a new entity. To decide if a noun phrase is a true entity (i.e., an individual entity that is a member of one or more lexical classes) or a non-entity (i.e., a common noun phrase that denotes a class or a general concept), we base the decision on the following hypothesis (inspired by and generalizing (Bunescu 2006): A given noun phrase, not known to the knowledge base, is a true entity if its headword is singular and is consistently capitalized (i.e., always spelled with the first letter in upper case). 3 Typing Emerging Entities To deduce types for new entities we propose to align new entities along the type signatures of patterns they occur with. In this manner we use the patterns to suggest types for the entities they occur with. In particular, we infer entity types from pattern type signatures. Our approach builds on the following hypothesis: Hypothesis 3.1 (Type Alignment Hypothesis) For a given pattern such as ⟨actor⟩’s character in ⟨movie⟩, we assume that an entity pair (x, y) frequently occurring with the pattern in text implies that x and y are of the types ⟨actor⟩and ⟨movie⟩, respectively. Challenges and Objective. While the type alignment hypothesis works as a starting point, it introduces false positives. Such false positives stem 1489 from the challenges of polysemy, fuzzy pattern matches, and incorrect paths between entities. With polysemy, the same lexico-syntactic pattern can have different type signatures. For example, the following are three different patterns: ⟨singer⟩ released ⟨album⟩, ⟨music band⟩released ⟨album⟩, ⟨company⟩released ⟨product⟩. For an entity pair (x, y) occurring with the pattern “released”, x can be one of three different types. We cannot expect that the phrases we extract in text will be exact matches of the typed relational patterns learned by PATTY. Therefore, for better recall, we must accept fuzzy matches. Quite often however, the extracted phrase matches multiple relational patterns to various degrees. Each of the matched relational patterns has its own type signature. The type signatures of the various matched patterns can be incompatible with one another. The problem of incorrect paths between entities emerges when a pair of entities occurring in the same sentence do not stand in a true subject-object relation. Dependency parsing does not adequately solve the issue. Web sources contain a plethora of sentences that are not well-formed. Such sentences mislead the dependency parser to extract wrong dependencies. Our solution takes into account polysemy, fuzzy matches, as well as issues stemming from potential incorrect-path limitations. We define and solve the following optimization problem: Definition 1 (Type Inference Optimization) Given all the candidate types for x, find the best types or “strongly supported” types for x. The final solution must satisfy type disjointness constraints. Type disjointness constraints are constraints that indicate that, semantically, a pair of types cannot apply to the same entity at the same time. For example, a ⟨university⟩cannot be a ⟨person⟩. We also study a relaxation of type disjointness constraints through the use of type correlation constraints. 
Our task is therefore twofold: first, generate candidate types for new entities; second, find the best types for each new entity among its candidate types. 4 Candidate Types for Entities For a given entity, candidate types are types that can potentially be assigned to that entity, based on the entity’s co-occurrences with typed relational patterns. Definition 2 (Candidate Type) Given a new entity x which occurs with a number of patterns p1, p2, ..., pn, where each pattern pi has a type signature with a domain and a range: if x occurs on the left of pi, we pick the domain of pi as a candidate type for x; if x occurs on the right of pi, we pick the range of pi as a candidate type for x. For each candidate type, we compute confidence weights. Ideally, if an entity occurs with a pattern which is highly specific to a given type then the candidate type should have high confidence. For example “is married to” is more specific to people then “expelled from”. A person can be expelled from an organization but a country can also be expelled from an organization such as NATO. There are various ways to compute weights for candidate types. We first introduce a uniform weight approach and then present a method for computing more informative weights. 4.1 Uniform Weights We are given a new entity x which occurs with phrases (x phrase1 y1), (x phrase2 y2), ..., (x phrasen yn). Suppose these occurrences lead to the facts (x, p1, y1), (x, p2, y2),..., (x, pn, yn). The pis are the typed relational patterns extracted by PATTY. The facts are generated by matching phrases to relational patterns with type signatures. The type signature of a pattern is denoted by: sig(pi) = (domain(pi), range(pi)) We allow fuzzy matches, hence each fact comes with a match score. This is the similarity degree between the phrase observed in text and the typed relational pattern. Definition 3 (Fuzzy Match Score) Suppose we observe the surface string: (x phrase y) which leads to the fact: x, pi, y. The fuzzy match similarity score is: sim(phrase, pi), where similarity is the n-gram Jaccard similarity between the phrase and the typed pattern. The confidence that x is of type domain is defined as follows: Definition 4 (Candidate Type Confidence) For a given observation (x phrase y), where 1490 phrase matches patterns p1, ..., pn, with domains d1, ..., db which are possibly the same: typeConf(x, phrase, d) = X {pi:domain(pi)=d}  sim(phrase, pi)  Observe that this sums up over all patterns that match the phrase. To compute the final confidence for typeConf(x, domain), we aggregate the confidences over all phrases occurring with x. Definition 5 (Aggregate Confidence) For a set of observations (x, phrase1, y1), (x, phrase2, y2), ..., (x, phrasen, yn), the aggregate candidate type confidence is given by: aggTypeConf(x, d) = X phrasei typeConf(x, phrasei, d) = X phrasei X {pj:domain(pj)=d} (sim(phrasei, pj)) The confidence for the range typeConf(x, range) is computed analogously. All confidence weights are normalized to values in [0, 1]. The limitation of the uniform weight approach is that each pattern is considered equally good for suggesting candidate types. Thus this approach does not take into account the intuition that an entity occurring with a pattern which is highly specific to a given type is a stronger signal that the entity is of the type suggested. Our next approach addresses this limitation. 4.2 Co-occurrence Likelihood Weight Computation We devise a likelihood model for computing weights for entity candidate types. 
Central to this model is the estimation of the likelihood of a given type occurring with a given pattern. Suppose using PATTY methods we mined a typed relational pattern ⟨t1⟩p ⟨t2⟩. Suppose that we now encounter a new entity pair (x, y) occurring with a phrase that matches p. We can compute the likelihood of x and y being of types t1 and t2, respectively, from the likelihood of p cooccurring with entities of types t1, t2. Therefore we are interested in the type-pattern likelihood, defined as follows: Definition 6 (Type-Pattern Likelihood) The likelihood of p co-occurring with an entity pair (x, y) of the types (t1, t2) is given by: P[t1, t2|p] (1) where t1 and t2 are the types of the arguments observed with p from a corpus such as Wikipedia. P[t1, t2|p] is expanded as follows: P[t1, t2|p] = P[t1, t2, p] P[p] . (2) The expressions on the right-hand side of Equation 2 can be directly estimated from a corpus. We use Wikipedia (English), for corpus-based estimations. P[t1, t2, p] is the relative occurrence frequency of the typed pattern among all entitypattern-entity triples in a corpus (e.g., the fraction of ⟨musican⟩plays ⟨song⟩among all triples). P[p] is the relative occurrence frequency of the untyped pattern (e.g., plays) regardless of the argument types. For example, this sums up over both ⟨musican⟩plays ⟨song⟩occurrences and ⟨actor⟩ plays ⟨fictional character⟩. If we observe a fact where one argument name can be easily disambiguated to a knowledge-base entity so that its type is known, and the other argument is considered to be an out-of-knowledge-base entity, we condition the joint probability of t1, p, and t2 in a different way: Definition 7 (Conditional Type-PatternLikelihood) The likelihood of an entity of type t1 occurring with a pattern p and an entity of type t2 is given by: P[t1|t2, p] = P[t1, t2, p] P[p, t2] (3) where the P[p, t2] is the relative occurrence frequency of a partial triple, for example, ⟨*⟩plays ⟨song⟩. Observe that all numbers refer to occurrence frequencies. For example, P[t1, p, t2] is a fraction of the total number of triples in a corpus. Multiple patterns can suggest the same type for an entity. Therefore, the weight of the assertion that y is of type t, is the total support strength from all phrases that suggest type t for y. Definition 8 (Aggregate Likelihood) The aggregate likelihood candidate type confidence is given 1491 by: typeConf(x, domain)) = X phrasei X pj  sim(phrasei, pj) ∗Υ  Where Υ = P[t1, t2|p] or P[t1|t2, p] or P[t2|t1, p] The confidence weights are normalized to values in [0, 1]. So far we have presented a way of generating a number of weighted candidate types for x. In the next step we pick the best types for an entity among all its candidate types. 4.3 Integer Linear Program Formulation Given a set of weighted candidate types, our goal is to pick a compatible subset of types for x. The additional asset that we leverage here is the compatibility of types: how likely is it that an entity belongs to both type ti and type tj. Some types are mutually exclusive, for example, the type location rules out person and, at finer levels, city rules out river and building, and so on. Our approach harnesses these kinds of constraints. Our solution is formalized as an Integer Linear Program (ILP). We have candidate types for x: t1, .., tn. First, we define a decision variable Ti for each candidate type i = 1, . . . , n. 
These are binary variables: Ti = 1 means type ti is selected to be included in the set of types for x, Ti = 0 means we discard type ti for x. In the following we develop two variants of this approach: a “hard” ILP with rigorous disjointness constraints, and a “soft” ILP which considers type correlations. “Hard” ILP with Type Disjointness Constraints. We infer type disjointness constraints from the YAGO2 knowledge base using occurrence statistics. Types with no overlap in entities or insignificant overlap below a specified threshold are considered disjoint. Notice that this introduces hard constraints whereby selecting one type of a disjoint pair rules out the second type. We define type disjointness constraints Ti + Tj ≤1 for all disjoint pairs ti, tj (e.g. person-artifact, moviebook, city-country, etc.). The ILP is defined as follows: objective max P i Ti × wi type disjointness constraint ∀(ti, tj)disjoint Ti + Tj ≤1 The weights wi are the aggregrated likelihoods as specified in Definition 8. “Soft” ILP with Type Correlations. In many cases, two types are not really mutually exclusive in the strict sense, but the likelihood that an entity belongs to both types is very low. For example, few drummers are also singers. Conversely, certain type combinations are boosted if they are strongly correlated. An example is guitar players and electric guitar players. Our second ILP considers such soft constraints. To this end, we precompute Pearson correlation coefficients for all type pairs (ti, tj) based on co-occurrences of types for the same entities. These values vij ∈[−1, 1] are used as weights in the objective function of the ILP. We additionally introduce pair-wise decision variables Yij, set to 1 if the entity at hand belongs to both types ti and tj, and 0 otherwise. This coupling between the Yij variables and the Ti, Tj variables is enforced by specific constraints. For the objective function, we choose a linear combination of per-type evidence, using weights wi as before, and the type-compatibility measure, using weights vij. The ILP with correlations is defined as follows: objective max α P i Ti × wi + (1 −α) P ij Yij × vij type correlation constraints ∀i,j Yij + 1 ≥Ti + Tj ∀i,j Yij ≤Ti ∀i,j Yij ≤Tj Note that both ILP variants need to be solved per entity, not over all entities together. The “soft” ILP has a size quadratic in the number of candidate types, but this is still a tractable input for modern solvers. We use the Gurobi software package to compute the solutions for the ILP’s. With this design, PEARL can efficiently handle a typical news article in less than a second, and is well geared for keeping up with high-rate content streams in real time. For both the “hard” and “soft” variants of the ILP, the solution is the best types for entity x satisfying the constraints. 1492 5 Evaluation To define a suitable corpus of test data, we obtained a stream of news documents by subscribing to Google News RSS feeds for a few topics over a six-month period (April 2012 – September 2012). This produced 318, 434 documents. The topics we subscribed to are: Angela Merkel, Barack Obama, Business, Entertainment, Hillary Clinton, Joe Biden, Mitt Romney, Newt Gingrich, Rick Santorum, SciTech and Top News. All our experiments were carried out on this data. The type system used is that of YAGO2, which is derived from WordNet. Human evaluations were carried out on Amazon Mechanical Turk (MTurk), which is a platform for crowd-sourcing tasks that require human input. 
Tasks on MTurk are small questionnaires consisting of a description and a set of questions. Baselines. We compared PEARL against two state-of-the-art baselines: i). NNPLB (No Noun Phrase Left Behind), is the method presented in (Lin 2012), based on the propagation of types for known entities through salient patterns occurring with both known and unknown entities. We implemented the algorithm in (Lin 2012) in our framework, using the relational patterns of PATTY (Nakashole 2012) for comparability. For assessment we sampled from the top-5 highest ranked types for each entity. In our experiments, our implementation of NNPLB achieved precision values comparable to those reported in (Lin 2012). ii). HYENA (Hierarchical tYpe classification for Entity NAmes), the method of (Yosef 2012), based on a feature-rich classifier for fine-grained, hierarchical type tagging. This is a state-of-the-art representative of similar methods such as (Rahman 2010; Ling 2012). Evaluation Task. To evaluate the quality of types assigned to emerging entities, we presented turkers with sentences from the news tagged with outof-KB entities and the types inferred by the methods under test. The turkers task was to assess the correctness of types assigned to an entity mention. To make it easy to understand the task for the turkers, we combined the extracted entity and type into a sentence. For example if PEARL inferred that Brussels Summit is an political event, we generate and present the sentence: Brussels Summit is an event. We allowed four possible assessment values: a) Very good output corresponds to a perfect result. b) Good output exhibits minor errors. For instance, the description G20 Summit is an organization is wrong, because the summit is an event, but G20 is indeed an organization. The problem in this example is incorrect segmentation of a named entity. c) Wrong for incorrect types (e.g., Brussels Summit is a politician). d) Not sure / do not know for other cases. Comparing PEARL to Baselines. Per method, turkers evaluated 105 entity-type pair test samples. We first sampled among out-of-KB entities that were mentioned frequently in the news corpus: in at least 20 different news articles. Each test sample was given to 3 different turkers for assessment. Since the turkers did not always agree if the type for a sample is good or not, we aggregate their answers. We use voting to decide whether the type was assigned correctly to an entity. We consider the following voting variants: i) majority “very good” or “good”, a conservative notion of precision: precisionlower. ii) at least one “very good” or “good”, a liberal notion of precision: precisionupper. Table 1 shows precision for PEARL-hard, PEARL-soft, NNPLB, and HYENA, with a 0.9-confidence Wilson score interval (Brown 2001). PEARL-hard outperformed PEARL-soft and also both baselines. HYENA’s relatively poor performance can be attributed to the fact that its features are mainly syntactic such as bi-grams and part-of-speech tags. Web data is challenging, it has a lot of variations in syntactic formulations. This introduces a fair amount of ambiguity which can easily mislead syntactic features. Leveraging semantic features as done by PEARL could improve HYENA’s performance. While the NNPLB method performs better than HYENA, in comparison to PEARL-hard, there is room for improvement. Like HYENA, NNPLB assigns negatively correlated types to the same entity. 
This limitation could be addressed by applying PEARL’s ILPs and probabilistic weights to the candidate types suggested by NNPLB. To compute inter-judge agreement we calculated Fleiss’ kappa and Cohen’s kappa κ, which are standard measures. The usual assumption for Fleiss’κ is that labels are categorical, so that each disagreement counts the same. This is not the case in our settings, where different labels may indicate partial agreement (“good”, “very good”). There1493 Precisionlower Precisionupper PEARL-hard 0.77±0.08 0.88±0.06 PEARL-soft 0.53±0.09 0.77±0.09 HYENA 0.26±0.08 0.56±0.09 NNPLB 0.46±0.09 0.68±0.09 Table 1: Comparison of PEARL to baselines. κ Fleiss Cohen 0.34 0.45 Table 2: Lower bound estimations for inter-judge agreement kappa: Fleiss’ κ & adapted Cohen’s κ. fore the κ values in Table 2 are lower-bound estimates of agreement in our experiments; the “true agreement” seems higher. Nevertheless, the observed Fleiss κ values show that the task was fairly clear to the turkers; values > 0.2 are generally considered as acceptable (Landis 1977). Cohen’s κ is also not directly applicable to our setting. We approximated it by finding pairs of judges who assessed a significant number of the same entity-type pairs. Precisionlower Precisionupper Freq. mentions 0.77±0.08 0.88±0.06 All mentions 0.65±0.09 0.77±0.08 Table 3: PEARL-hard performance on a sample of frequent entities (mention frequency≥20) and on a sample of entities of all mention frequencies. Mention Frequencies. We also studied PEARLhard’s performance on entities of different mention frequencies. The results are shown in Table 3. Frequently mentioned entities provide PEARL with more evidence as they potentially occur with more patterns. Therefore, as expected, precision when sampling over all entities drops a bit. For such infrequent entities, PEARL does not have enough evidence for reliable type assignments. Variations of PEARL. To quantify how various aspects of our approach affect performance, we studied a few variations. The first method is the full PEARL-hard. The second method is PEARL with no ILP (denoted No ILP), only using the probabilistic model. The third variation is PEARL without probabilistic weights (denoted Uniform Figure 1: Variations of the PEARL method. Weights). From Figure 1, it is clear that both the ILP and the weighting model contribute significantly to PEARL’s ability to make precise type assignments. Sample results from PEARL-hard are shown in Table 4. NDCG. For a given entity mention e, an entitytyping system returns a ranked list of types {t1, t2, ..., tn}. We evaluated ranking quality using the top-5 ranks for each method. These assessments were aggregated into the normalized discounted cumulative gain (NDCG), a widely used measure for ranking quality. The NDCG values obtained are 0.53, 0.16, and 0.16, for PEARLhard, HYENA, and NNPLB, respectively. PEARL clearly outperforms the baselines on ranking quality, too. 6 Related Work Tagging mentions of named entities with lexical types has been pursued in previous work. Most well-known is the Stanford named entity recognition (NER) tagger (Finkel 2005) which assigns coarse-grained types like person, organization, location, and other to noun phrases that are likely to denote entities. There is fairly little work on finegrained typing, notable results being (Fleischmann 2002; Rahman 2010; Ling 2012; Yosef 2012). These methods consider type taxonomies similar to the one used for PEARL, consisting of several hundreds of fine-grained types. 
All methods use trained classifiers over a variety of linguistic features, most importantly, words and bigrams with part-of-speech tags in a mention and in the textual context preceding and following the mention. In addition, the method of (Yosef 2012) (HYENA) utilizes a big gazetteer of per-type words that occur in Wikipedia anchor texts. This method outperforms earlier techniques on a variety of test 1494 Entity Inferred Type Sample Source Sentence (s) Lochte medalist Lochte won America’s lone gold ... Malick director ... the red carpet in Cannes for Malick’s 2011 movie ... Bonamassa musician Bonamassa recorded Driving Towards the Daylight in Las Vegas ... ... Bonamassa opened for B.B. King in Rochester , N.Y. Analog Man album Analog Man is Joe Walsh’s first solo album in 20 years. Melinda Liu journalist ... in a telephone interview with journalist Melinda Liu of the Daily Beast. RealtyTrac publication Earlier this month, RealtyTrac reported that ... Table 4: Sample types inferred by PEARL. cases; hence it served as one of our baselines. Closely related to our work is the recent approach of (Lin 2012) (NNPLB) for predicting types for out-of-KB entities. Noun phrases in the subject role in a large collection of fact triples are heuristically linked to Freebase entities. This yields type information for the linked mentions. For unlinkable entities the NNPLB method (inspired by (Kozareva 2011)) picks types based on co-occurrence with salient relational patterns by propagating types of linked entities to unlinkable entities that occur with the same patterns. Unlike PEARL, NNPLB does not attempt to resolve inconsistencies among the predicted types. In contrast, PEARL uses an ILP with type disjointness and correlation constraints to solve and penalize such inconsistencies. NNPLB uses untyped patterns, whereas PEARL harnesses patterns with type signatures. Furthermore, PEARL computes weights for candidate types based on patterns and type signatures. Weight computations in NNPLB are only based on patterns. NNPLB only assigns types to entities that appear in the subject role of a pattern. This means that entities in the object role are not typed at all. In contrast, PEARL infers types for entities in both the subject and object role. Type disjointness constraints have been studied for other tasks in information extraction (Carlson 2010; Suchanek 2009), but using different formulations. 7 Conclusion This paper addressed the problem of detecting and semantically typing newly emerging entities, to support the life-cycle of large knowledge bases. Our solution, PEARL, draws on a collection of semantically typed patterns for binary relations. PEARL feeds probabilistic evidence derived from occurrences of such patterns into two kinds of ILPs, considering type disjointness or type correlations. This leads to highly accurate type predictions, significantly better than previous methods, as our crowdsourcing-based evaluation showed. References S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, Z.G. Ives: DBpedia: A Nucleus for a Web of Open Data. In Proceedings of the 6th International Semantic Web Conference (ISWC), pages 722–735, Busan, Korea, 2007. M. Banko, M. J. Cafarella, S. Soderland, M. Broadhead, O. Etzioni: Open Information Extraction from the Web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), pages 2670–2676, Hyderabad, India, 2007. K. D. Bollacker, C. Evans, P. Paritosh, T. Sturge, J. 
Taylor: Freebase: a Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD), pages, 1247-1250, Vancouver, BC, Canada, 2008. Lawrence D. Brown, T.Tony Cai, Anirban Dasgupta: Interval Estimation for a Binomial Proportion. Statistical Science 16: pages 101–133, 2001. R. C. Bunescu, M. Pasca: Using Encyclopedic Knowledge for Named entity Disambiguation. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Trento, Italy, 2006. A. Carlson, J. Betteridge, R.C. Wang, E.R. Hruschka, T.M. Mitchell: Coupled Semi-supervised Learning for Information Extraction. In Proceedings of the Third International Conference on Web Search and Web Data Mining (WSDM), pages 101–110, New York, NY, USA, 2010. S. Cucerzan: Large-Scale Named Entity Disambiguation Based on Wikipedia Data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP1495 CoNLL), pages 708–716, Prague, Czech Republic, 2007. A. Fader, S. Soderland, O. Etzioni: Identifying Relations for Open Information Extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1535–1545, Edinburgh, UK, 2011. J.R. Finkel, T. Grenager, C. Manning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 363–370, Ann Arbor, Michigan, 2005. Michael Fleischman, Eduard H. Hovy: Fine Grained Classification of Named Entities. In Proceedings the International Conference on Computational Linguistics, COLING 2002. X. Han, J. Zhao: Named Entity Disambiguation by Leveraging Wikipedia Semantic Knowledge. In Proceedings of 18th ACM Conference on Information and Knowledge Management (CIKM), pages 215 – 224,Hong Kong, China, 2009. C. Havasi, R. Speer, J. Alonso. ConceptNet 3: a Flexible, Multilingual Semantic Network for Common Sense Knowledge. In Proceedings of the Recent Advances in Natural Language Processing (RANLP), Borovets, Bulgaria, 2007. Sebastian Hellmann, Claus Stadler, Jens Lehmann, Sren Auer: DBpedia Live Extraction. OTM Conferences (2) 2009: 1209-1223. J. Hoffart, M. A. Yosef, I.Bordino and H. Fuerstenau, M. Pinkal, M. Spaniol, B.Taneva, S.Thater, Gerhard Weikum: Robust Disambiguation of Named Entities in Text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 782–792, Edinburgh, UK, 2011. J. Hoffart, F. Suchanek, K. Berberich, E. LewisKelham, G. de Melo, G. Weikum: YAGO2: Exploring and Querying World Knowledge in Time, Space, Context, and Many Languages. In Proceedings of the 20th International Conference on World Wide Web (WWW), pages 229–232, Hyderabad, India. 2011. J. Hoffart, F. Suchanek, K. Berberich, G. Weikum: YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia. Artificial Intelligence 2012. Z. Kozareva, L. Voevodski, S.-H.Teng: Class Label Enhancement via Related Instances. EMNLP 2011: 118-128 J. R. Landis, G. G. Koch: The measurement of observer agreement for categorical data in Biometrics. Vol. 33, pp. 159174, 1977. C. Lee, Y-G. Hwang, M.-G. Jang: Fine-grained Named Entity Recognition and Relation Extraction for Question Answering. 
In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 799–800, Amsterdam, The Netherlands, 2007. T. Lin, Mausam , O. Etzioni: No Noun Phrase Left Behind: Detecting and Typing Unlinkable Entities. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 893–903, Jeju, South Korea, 2012. Xiao Ling, Daniel S. Weld: Fine-Grained Entity Recognition. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2012 D. N. Milne, I. H. Witten: Learning to Link with Wikipedia. In Proceedings of 17th ACM Conference on Information and Knowledge Management (CIKM), pages 509-518, Napa Valley, California, USA, 2008. N. Nakashole, G. Weikum, F. Suchanek: PATTY: A Taxonomy of Relational Patterns with Semantic Types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1135 1145, Jeju, South Korea, 2012. V. Nastase, M. Strube, B. Boerschinger, C¨acilia Zirn, Anas Elghafari: WikiNet: A Very Large Scale Multi-Lingual Concept Network. In Proceedings of the 7th International Conference on Language Resources and Evaluation(LREC), Malta, 2010. H. T. Nguyen, T. H. Cao: Named Entity Disambiguation on an Ontology Enriched by Wikipedia. In Proceedings of the IEEE International Conference on Research, Innovation and Vision for the Future in Computing & Communication Technologies (RIVF), pages 247–254, Ho Chi Minh City, Vietnam, 2008. Feng Niu, Ce Zhang, Christopher Re, Jude W. Shavlik: DeepDive: Web-scale Knowledge-base Construction using Statistical Learning and Inference. In the VLDS Workshop, pages 25-28, 2012. A. Rahman, Vincent Ng: Inducing Fine-Grained Semantic Classes via Hierarchical and Collective Classification. In Proceedings the International Conference on Computational Linguistics (COLING), pages 931-939, 2010. F. M. Suchanek, G. Kasneci, G. Weikum: Yago: a Core of Semantic Knowledge. In Proceedings of the 16th International Conference on World Wide Web (WWW) pages, 697-706, Banff, Alberta, Canada, 2007. 1496 F. M. Suchanek, M. Sozio, G. Weikum: SOFIE: A Self-organizing Framework for Information Extraction. InProceedings of the 18th International Conference on World Wide Web (WWW), pages 631–640, Madrid, Spain, 2009. P.P. Talukdar, F. Pereira: Experiments in Graph-Based Semi-Supervised Learning Methods for ClassInstance Acquisition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 1473-1481, 2010. P. Venetis, A. Halevy, J. Madhavan, M. Pasca, W. Shen, F. Wu, G. Miao, C. Wu: Recovering Semantics of Tables on the Web. In Proceedings of the VLDB Endowment, PVLDB 4(9), pages, 528–538. 2011. W. Wu, H. Li, H. Wang, K. Zhu: Probase: A Probabilistic Taxonomy for Text Understanding. In Proceedings of the International Conference on Management of Data (SIGMOD), pages 481–492, Scottsdale, AZ, USA, 2012. M. A. Yosef, S. Bauer, J. Hoffart, M. Spaniol, G. Weikum: HYENA: Hierarchical Type Classification for Entity Names. In Proceedings the International Conference on Computational Linguistics(COLING), to appear, 2012. 1497
2013
146
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1498–1507, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Embedding Semantic Similarity in Tree Kernels for Domain Adaptation of Relation Extraction Barbara Plank∗ Center for Language Technology University of Copenhagen, Denmark [email protected] Alessandro Moschitti QCRI - Qatar Foundation & DISI - University of Trento, Italy [email protected] Abstract Relation Extraction (RE) is the task of extracting semantic relationships between entities in text. Recent studies on relation extraction are mostly supervised. The clear drawback of supervised methods is the need of training data: labeled data is expensive to obtain, and there is often a mismatch between the training data and the data the system will be applied to. This is the problem of domain adaptation. In this paper, we propose to combine (i) term generalization approaches such as word clustering and latent semantic analysis (LSA) and (ii) structured kernels to improve the adaptability of relation extractors to new text genres/domains. The empirical evaluation on ACE 2005 domains shows that a suitable combination of syntax and lexical generalization is very promising for domain adaptation. 1 Introduction Relation extraction is the task of extracting semantic relationships between entities in text, e.g. to detect an employment relationship between the person Larry Page and the company Google in the following text snippet: Google CEO Larry Page holds a press announcement at its headquarters in New York on May 21, 2012. Recent studies on relation extraction have shown that supervised approaches based on either feature or kernel methods achieve state-of-the-art accuracy (Zelenko et al., 2002; Culotta and Sorensen, 2004; ∗The first author was affiliated with the Department of Computer Science and Information Engineering of the University of Trento (Povo, Italy) during the design of the models, experiments and writing of the paper. Zhang et al., 2005; Zhou et al., 2005; Zhang et al., 2006; Bunescu, 2007; Nguyen et al., 2009; Chan and Roth, 2010; Sun et al., 2011). However, the clear drawback of supervised methods is the need of training data, which can slow down the delivery of commercial applications in new domains: labeled data is expensive to obtain, and there is often a mismatch between the training data and the data the system will be applied to. Approaches that can cope with domain changes are essential. This is the problem of domain adaptation (DA) or transfer learning (TL). Technically, domain adaptation addresses the problem of learning when the assumption of independent and identically distributed (i.i.d.) samples is violated. Domain adaptation has been studied extensively during the last couple of years for various NLP tasks, e.g. two shared tasks have been organized on domain adaptation for dependency parsing (Nivre et al., 2007; Petrov and McDonald, 2012). Results were mixed, thus it is still a very active research area. 
However, to the best of our knowledge, there is almost no work on adapting relation extraction (RE) systems to new domains.1 There are some prior studies on the related tasks of multi-task transfer learning (Xu et al., 2008; Jiang, 2009) and distant supervision (Mintz et al., 2009), which are clearly related but different: the former is the problem of how to transfer knowledge from old to new relation types, while distant supervision tries to learn new relations from unlabeled text by exploiting weak-supervision in the form of a knowledge resource (e.g. Freebase). We assume the same relation types but a shift in the underlying 1Besides an unpublished manuscript of a student project, but it is not clear what data was used. http://tinyurl.com/ bn2hdwk 1498 data distribution. Weak supervision is a promising approach to improve a relation extraction system, especially to increase its coverage in terms of types of relations covered. In this paper we examine the related issue of changes in the underlying data distribution, while keeping the relations fixed. Even a weakly supervised system is expected to perform well when applied to any kind of text (other domain/genre), thus ideally, we believe that combining domain adaptation with weak supervision is the way to go in the future. This study is a first step towards this. We focus on unsupervised domain adaptation, i.e. no labeled target data. Moreover, we consider a particular domain adaptation setting: singlesystem DA, i.e. learning a single system able to cope with different but related domains. Most studies on DA so far have focused on building a specialized system for every specific target domain, e.g. Blitzer et al. (2006). In contrast, the goal here is to build a single system that can robustly handle several domains, which is in line with the setup of the recent shared task on parsing the web (Petrov and McDonald, 2012). Participants were asked to build a single system that can robustly parse all domains (reviews, weblogs, answers, emails, newsgroups), rather than to build several domain-specific systems. We consider this as a shift in what was considered domain adaptation in the past (adapt from source to a specific target) and what can be considered a somewhat different recent view of DA, that became widespread since 2011/2012. The latter assumes that the target domain(s) is/are not really known in advance. In this setup, the domain adaptation problem boils down to finding a more robust system (Søgaard and Johannsen, 2012), i.e. one wants to build a system that can robustly handle any kind of data. We propose to combine (i) term generalization approaches and (ii) structured kernels to improve the performance of a relation extractor on new domains. Previous studies have shown that lexical and syntactic features are both very important (Zhang et al., 2006). We combine structural features with lexical information generalized by clusters or similarity. Given the complexity of feature engineering, we exploit kernel methods (ShaweTaylor and Cristianini, 2004). We encode word clusters or similarity in tree kernels, which, in turn, produce spaces of tree fragments. For example, “president”, “vice-president” and “Texas”, “US”, are terms indicating an employment relation between a person and a location. Rather than only matching the surface string of words, lexical similarity enables soft matches between similar words in convolution tree kernels. 
In the empirical evaluation on Automatic Content Extraction (ACE) data, we evaluate the impact of convolution tree kernels embedding lexical semantic similarities. The latter is derived in two ways with: (a) Brown word clustering (Brown et al., 1992); and (b) Latent Semantic Analysis (LSA). We first show that our system aligns well with the state of the art on the ACE 2004 benchmark. Then, we test our RE system on the ACE 2005 data, which exploits kernels, structures and similarities for domain adaptation. The results show that combining the huge space of tree fragments generalized at the lexical level provides an effective model for adapting RE systems to new domains.

2 Semantic Syntactic Tree Kernels

In kernel-based methods, both learning and classification only depend on the inner product between instances. Kernel functions can be efficiently and implicitly computed by exploiting the dual formulation: Σ_{i=1..l} yi αi φ(oi)·φ(o) + b = 0, where oi and o are two objects, φ is a mapping from an object to a feature vector x⃗i and φ(oi)·φ(o) = K(oi, o) is a kernel function implicitly defining such a mapping. In case of structural kernels, K determines the shape of the substructures describing the objects. Commonly used kernels in NLP are string kernels (Lodhi et al., 2002) and tree kernels (Moschitti, 2006; Moschitti, 2008).

[Figure 1: Syntactic tree kernel (STK). The parse tree of "governor from Texas", with the target entities marked E1 and E2, is mapped to some of the tree fragments it generates.]

Syntactic tree kernels (Collins and Duffy, 2001) compute the similarity between two trees T1 and T2 by counting common sub-trees (cf. Figure 1), without enumerating the whole fragment space. However, if two trees have similar substructures that employ different though related terminal nodes, they will not be matched. This is clearly a limitation. For instance, the fragments corresponding to governor from Texas and head of Maryland are intuitively semantically related and should obtain a higher match when compared to mother of them. Semantic syntactic tree kernels (Bloehdorn and Moschitti, 2007a; Bloehdorn and Moschitti, 2007b; Croce et al., 2011) provide one way to address this problem by introducing similarity σ that allows soft matches between words and, consequently, between fragments containing them.

Let N1 and N2 be the set of nodes in T1 and T2, respectively. Moreover, let Ii(n) be an indicator variable that is 1 if subtree i is rooted at n and 0 otherwise. The syntactic semantic convolution kernel TKσ (Bloehdorn and Moschitti, 2007b) over T1 and T2 is computed as TKσ(T1, T2) = Σ_{n1∈N1, n2∈N2} ∆σ(n1, n2), where ∆σ(n1, n2) = Σ_i Ii(n1) Ii(n2) is computed efficiently using the following recursive definition:
i) if the nodes n1 and n2 are either different or have a different number of children, then ∆σ(n1, n2) = 0; else
ii) if n1 and n2 are pre-terminals, then ∆σ(n1, n2) = λ ∏_{j=1..nc(n1)} σ(ch(n1, j), ch(n2, j)), where σ measures the similarity between the corresponding children of n1 and n2;
iii) if n1 and n2 have identical children: ∆σ(n1, n2) = λ ∏_{j=1..nc(n1)} (1 + ∆σ(ch(n1, j), ch(n2, j))); else ∆σ(n1, n2) = 0.
TKσ combines generalized lexical with structural information: it allows matching tree fragments that have the same syntactic structure but differ in their terminals. After introducing related work, we will discuss computational structures for RE and their extension with semantic similarity.
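The recursive definition above translates almost directly into code. The following is a simplified sketch (not the SVM-light-TK implementation used in the experiments) that computes TKσ over trees encoded as nested (label, children) pairs; the word-similarity function σ is passed in as a parameter, and the λ value and the toy similarity scores are purely illustrative.

```python
# Simplified semantic syntactic tree kernel (TK_sigma) over small parse trees.
# A tree is a tuple (label, [children]); a terminal word is (word, []).
# `sigma` is any word similarity in [0, 1]: exact match, Brown clusters, LSA cosine, ...

def nodes(tree):
    """All non-terminal nodes of the tree (terminal words are only visited as children)."""
    label, children = tree
    result = [tree] if children else []
    for child in children:
        result.extend(nodes(child))
    return result

def delta(n1, n2, sigma, lam=0.4):
    (l1, c1), (l2, c2) = n1, n2
    if l1 != l2 or len(c1) != len(c2):                       # case i)
        return 0.0
    if all(not gc for _, gc in c1):                          # case ii): pre-terminal pair
        score = lam
        for (w1, _), (w2, _) in zip(c1, c2):
            score *= sigma(w1, w2)
        return score
    if [c[0] for c in c1] == [c[0] for c in c2]:             # case iii): same production
        score = lam
        for child1, child2 in zip(c1, c2):
            score *= 1.0 + delta(child1, child2, sigma, lam)
        return score
    return 0.0

def tk_sigma(t1, t2, sigma, lam=0.4):
    return sum(delta(n1, n2, sigma, lam) for n1 in nodes(t1) for n2 in nodes(t2))

# Toy example: "governor from Texas" vs. "head of Maryland" with a toy similarity.
t1 = ("NP", [("NP", [("E1", [("NNP", [("governor", [])])])]),
             ("PP", [("IN", [("from", [])]), ("NP", [("E2", [("NNP", [("Texas", [])])])])])])
t2 = ("NP", [("NP", [("E1", [("NNP", [("head", [])])])]),
             ("PP", [("IN", [("of", [])]), ("NP", [("E2", [("NNP", [("Maryland", [])])])])])])
related = {frozenset(["governor", "head"]): 0.7, frozenset(["Texas", "Maryland"]): 0.6,
           frozenset(["from", "of"]): 0.5}
sigma = lambda w1, w2: 1.0 if w1 == w2 else related.get(frozenset([w1, w2]), 0.0)
print(tk_sigma(t1, t2, sigma))
```

With σ set to exact string match, the sketch reduces to the plain syntactic tree kernel; plugging in a cluster- or LSA-based σ yields the soft fragment matching discussed above.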
3 Related Work Semantic syntactic tree kernels have been previously used for question classification (Bloehdorn and Moschitti, 2007a; Bloehdorn and Moschitti, 2007b; Croce et al., 2011). These kernels have not yet been studied for either domain adaptation or RE. Brown clusters were studied previously for feature-based approaches to RE (Sun et al., 2011; Chan and Roth, 2010), but they were not yet evaluated in kernels. Thus, we present a novel application of semantic syntactic tree kernels and Brown clusters for domain adaptation of tree-kernel based relation extraction. Regarding domain adaptation, several methods have been proposed, ranging from instance weighting (Jiang and Zhai, 2007) to approaches that change the feature representation (Daum´e III, 2007) or try to exploit pivot features to find a generalized shared representation between domains (Blitzer et al., 2006). The easy-adapt approach presented in Daum´e III (2007) assumes the supervised adaptation setting and is thus not applicable here. Structural correspondence learning (Blitzer et al., 2006) exploits unlabeled data from both source and target domain to find correspondences among features from different domains. These correspondences are then integrated as new features in the labeled data of the source domain. The key to SCL is to exploit pivot features to automatically identify feature correspondences, and as such is applicable to feature-based approaches but not in our case since we do not assume availability of target domain data. Instead, we apply a similar idea where we exploit an entire unlabeled corpus as pivot, and compare our approach to instance weighting (Jiang and Zhai, 2007). Instance weighting is a method for domain adaptation in which instance-dependent weights are assigned to the loss function that is minimized during the training process. Let l(x, y, θ) be some loss function. Then, as shown in Jiang and Zhai (2007), the loss function can be weighted by βil(x, y, θ), such that βi = Pt(xi) Ps(xi), where Ps and Pt are the source and target distributions, respectively. Huang et al. (2007) present an application of instance weighting to support vector machines by minimizing the following re-weighted function: minθ,ξ 1 2||θ||2 + C Pm i=1 βiξi. Finding a good weight function is non-trivial (Jiang and Zhai, 2007) and several approximations have been evaluated in the past, e.g. Søgaard and Haulrich (2011) use a bigram-based text classifier to discriminate between domains. We will use a binary classifier trained on RE instance representations. 4 Computational Structures for RE A common way to represent a constituency-based relation instance is the PET (path-enclosed-tree), the smallest subtree including the two target entities (Zhang et al., 2006). This is basically the former structure PAF2 (predicate argument feature) defined in Moschitti (2004) for the extraction of predicate argument relations. The syntactic rep2It is the smallest subtree enclosing the predicate and one of its argument node. 1500 resentation used by Zhang et al. (2006) (we will refer to it as PET Zhang) is the PET with enriched entity information: e.g. E1-NAM-PER, including entity type (PER, GPE, LOC, ORG) and mention type (NAM, NOM, PRO, PRE: name, nominal, pronominal or premodifier). An alternative kernel that does not use syntactic information is the Bag-of-Words (BOW) kernel, where a single root node is added above the terminals. Note that in this BOW kernel we actually mark target entities with E1/E2. 
Therefore, our BOW kernel can be considered an enriched BOW model. If we do not mark target entities, performance drops considerably, as discussed later. As shown by Zhang et al. (2006), including gold-standard information on entity and mention type substantially improves relation extraction performance. We will use this gold information also in Section 6.1 to show that our system aligns well to the state of the art on the ACE 2004 benchmark. However, in a realistic setting this information is not available or noisy. In fact, as we discuss later, excluding gold entity information decreases system performance considerably. In the case of porting a system to new domains entity information will be unreliable or missing. Therefore, in our domain adaptation experiments on the ACE 2005 data (Section 6.3) we will not rely on this gold information but rather train a system using PET (target mentions only marked with E1/E2 and no gold entity label).3 4.1 Syntactic Semantic Structures Combining syntax with semantics has a clear advantage: it generalizes lexical information encapsulated in syntactic parse trees, while at the same time syntax guides semantics in order to obtain an effective semantic similarity. In fact, lexical information is highly affected by data-sparseness, thus tree kernels combined with semantic information created from additional resources should provide a way to obtain a more robust system. We exploit this idea here for domain adaptation (DA): if words are generalized by semantic similarity LS, then in a hypothetical world changing LS such that it reflects the target domain would 3In a setup where gold label info is included, the impact of similarity-based methods is limited – gold information seems to predominate. We argue that whenever gold data is not available, distributional semantics paired with kernels can be useful to improve generalization and complement missing gold info. allow the system to perform better in the target domain. The question remains how to establish a link between the semantic similarity in the source and target domain. We propose to use an entire unlabeled corpus as pivot: this corpus must be general enough to encapsulate the source and target domains of interest. The idea is to (i) learn semantic similarity between words on the pivot corpus and (ii) use tree kernels embedding such a similarity to learn a RE system on the source, which allows to generalize to the new target domain. This reasoning is related to Structural Correspondence Learning (SCL) (Blitzer et al., 2006). In SCL, a representation shared across domains is learned by exploiting pivot features, where a set of pivot features has to be selected (usually a few thousands). In our case pivots are words that cooccur with the target words in a large unlabeled corpus and are thus implicitly represented in the similarity matrix. Thus, in contrast to SCL, we do not need to select a set of pivot features but rather rely on the distributional hypothesis to infer a semantic similarity from a large unlabeled corpus. Then, this similarity is incorporated into the tree kernel that provides the necessary restriction for an effective semantic similarity calculation. One peculiarity of our work is that we exploit a large amount of general data, i.e. data gathered from the web, which is a different but also more challenging scenario than the general unsupervised DA setting where domain specific data is available. 
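To make the pivot idea concrete before turning to the two generalization methods, the following is a minimal sketch of learning a term similarity σ from an unlabeled pivot corpus; the toy in-memory corpus, the context window and the raw co-occurrence counts are illustrative assumptions (the actual setup, described in Section 5, uses a tf-idf-transformed matrix built from ukWaC).

```python
import numpy as np

def build_similarity(sentences, vocab, window=2, dim=50):
    """Learn sigma from an unlabeled pivot corpus: build a word-by-context
    co-occurrence matrix, project words with truncated SVD (as in LSA) and
    return the cosine similarity between the projected word vectors."""
    index = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for sent in sentences:
        for i, w in enumerate(sent):
            if w not in index:
                continue
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i and sent[j] in index:
                    counts[index[w], index[sent[j]]] += 1.0
    # M ~ U_l S_l V_l^T; project terms with W = U_l S_l^(1/2).
    u, s, _ = np.linalg.svd(counts, full_matrices=False)
    dim = min(dim, len(s))
    w_proj = u[:, :dim] * np.sqrt(s[:dim])

    def sigma(w1, w2):
        if w1 not in index or w2 not in index:
            return 1.0 if w1 == w2 else 0.0    # back off to exact match
        v1, v2 = w_proj[index[w1]], w_proj[index[w2]]
        denom = np.linalg.norm(v1) * np.linalg.norm(v2)
        return float(v1 @ v2 / denom) if denom > 0 else 0.0

    return sigma
```

The resulting σ can be passed directly to a semantic tree kernel such as the $TK_\sigma$ sketch above; since the similarity is estimated on a general corpus covering both source and target vocabulary, no target-domain data, labeled or unlabeled, is required.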
We study two ways for term generalization in tree kernels: Brown word clusters and Latent Semantic Analysis (LSA), both briefly described next.

[Figure 2: Integrating Brown cluster information into the tree, illustrated on the phrase "officials from Seoul": (a) replace the POS tag with the cluster bitstring, (b) replace the word with the bitstring, (c) add the bitstring as a node above the POS tag.]

The Brown algorithm (Brown et al., 1992) is a hierarchical agglomerative hard-clustering algorithm. The path from the root of the tree down to a leaf node is represented compactly as a bitstring. By cutting the hierarchy at different levels one can obtain different granularities of word clusters. We evaluate different ways to integrate cluster information into tree kernels, some of which are illustrated in Figure 2.

For LSA, we compute term similarity functions following the distributional hypothesis (Harris, 1964), i.e. the meaning of a word can be described by the set of textual contexts in which it appears. The original word-by-word context matrix $M$ is decomposed through Singular Value Decomposition (SVD) (Golub and Kahan, 1965), where $M$ is approximated by $U_l S_l V_l^T$. This approximation supplies a way to project a generic term $w_i$ into the $l$-dimensional space using $W = U_l S_l^{1/2}$, where each row corresponds to the vector $\vec{w}_i$. Given two words $w_1$ and $w_2$, the term similarity function σ is estimated as the cosine similarity between the corresponding projections $\vec{w}_1$, $\vec{w}_2$ and used in the kernel as described in Section 2.

5 Experimental Setup

We treat relation extraction as a multi-class classification problem and use SVM-light-TK4 to train the binary classifiers. The output of the classifiers is combined using the one-vs-all approach. We modified the SVM-light-TK package to include the semantic tree kernels and instance weighting. The entire software package is publicly available.5 For the SVMs, we use the same parameters as Zhang et al. (2006): λ = 0.4, c = 2.4, using the Collins kernel (Collins and Duffy, 2001). The precision/recall trade-off parameter for the none class was found on held-out data: j = 0.2. Evaluation metrics are standard micro-averaged Precision, Recall and balanced F-score (F1). To compute statistical significance, we use the approximate randomization test (Noreen, 1989).6 In all our experiments, we model the argument order of the relations explicitly. Thus, for instance, for the 7 coarse ACE 2004 relations, we build 14 coarse-grained classifiers (two for each coarse ACE 2004 relation type except for PER-SOC, which is symmetric, and one classifier for the none relation).

4 http://disi.unitn.it/moschitti/Tree-Kernel.htm
5 http://disi.unitn.it/ikernels/RelationExtraction
6 http://www.nlpado.de/~sebastian/software/sigf.shtml

Data We use two datasets. To compare our model against the state of the art we use the ACE 2004 data. It contains 348 documents and 4,374 positive relation instances. To generate the training data, we follow prior studies and extract an instance for every pair of mentions in the same sentence that are separated by no more than three other mentions (Zhang et al., 2006; Sun et al., 2011). After data preprocessing, we obtained 4,327 positive and 39,120 negative instances. For the domain adaptation experiments we use the ACE 2005 corpus. An overview of the data is given in Table 1.

Table 1: Overview of the ACE 2005 data.
  ACE 2005   docs   sents   ASL    relations
  nw+bn      298    5029    18.8   3562
  bc          52    2267    16.3   1297
  cts         34    2696    15.3    603
  wl         114    1697    22.6    677
Note that this data is different from ACE 2004: it covers different years (ACE 2004: texts from 2001-2002; ACE 2005: 2003-2005). Moreover, the annotation guidelines have changed (for example, ACE 2005 contains no discourse relation, some relation (sub)types have changed/moved, and care must be taken for differences in SGM markup, etc.). More importantly, the ACE 2005 corpus covers additional domains: weblogs, telephone conversation, usenet and broadcast conversation. In the experiments, we use news (the union of nw and bn) as source domain, and weblogs (wl), telephone conversations (cts) and broadcast conversation (bc) as target domains.7 We take half of bc as only target development set, and leave the remaining data and domains for final testing (since they are already small, cf. Table 1). To get a feeling of how these domains differ, Figure 3 depicts the distribution of relations in each domain and Table 2 provides the most frequent out-of-vocabulary words together with their percentage. Lexical Similarity and Clustering We applied LSA to ukWaC (Baroni et al., 2009), a 2 billion word corpus constructed from the Web8 using the s-space toolkit.9 Dimensionality reduction was performed using SVD with 250 dimensions, following (Croce et al., 2011). The co-occurrence matrix was transformed by tfidf. For the Brown word clusters, we used Percy Liang’s implementation10 of the Brown clustering algorithm (Liang, 2005). We incorporate cluster information by us7We did not consider the usenet subpart, since it is among the smaller domains and data-preprocessing was difficult. 8http://wacky.sslmit.unibo.it/ 9http://code.google.com/p/airhead-research/ 10https://github.com/percyliang/brown-cluster 1502 nw_bn bc cts wl ART GEN−AFF ORG−AFF PART−WHOLE PER−SOC PHYS Distribution of relations across domains (normalized) Domain Proportion 0.0 0.1 0.2 0.3 0.4 Figure 3: Distribution of relations in ACE 2005. Dom Most frequent OOV words bc (24%) insurance, unintelligible, malpractice, ph, clip, colonel, crosstalk cts (34%) uh, Yeah, um, eh, mhm, uh-huh, ˜, ah, mm, th, plo, topic, y, workplace wl (49%) title, Starbucks, Well, blog, !!, werkheiser, undefeated, poor, shit Table 2: For each domain the percentage of target domain words (types) that are unseen in the source together with the most frequent OOV words. ing the 10-bit cluster prefix (Sun et al., 2011; Chan and Roth, 2010). For the domain adaptation experiments, we use ukWaC corpus-induced clusters as bridge between domains. We limited the vocabulary to that in ACE 2005, which are approximately 16k words. Following previous work, we left case intact in the corpus and induced 1,000 word clusters from words appearing at least 100 times.11 DA baseline We compare our approach to instance weighting (Jiang and Zhai, 2007). We modified SVM-light-TK such that it takes a parameter vector βi, .., βm as input, where each βi represents the relative importance of example i with respect to the target domain (Huang et al., 2007; Widmer, 2008). To estimate the importance weights, we train a binary classifier that distinguishes between source and target domain instances. We consider the union of the three target domains as target data. To train the classifier, the source instances are marked as negative and the target instances are marked as positive. 
Then, this classifier is applied to the source data. To obtain the weights βi, we convert the SVM scores into posterior probabilities by training a sigmoid using the modified Platt algorithm (Lin et al., 2007).12

6 Results

6.1 Alignment to Prior Work

Although most prior studies performed 5-fold cross-validation on ACE 2004, it is often not clear whether the partitioning has been done on the instance or on the document level. Moreover, it is often not stated whether argument order is modeled explicitly, making it difficult to compare system performance. Citing Wang (2008), "We feel that there is a sense of increasing confusion down this line of research". To ease comparison for future research we use the same 5-fold split on the document level as Sun et al. (2011)13 and make our system publicly available (see Section 5).

Table 3: Comparison to previous work on the 7 relations of ACE 2004. K: kernel-based; F: feature-based; yes/no: models argument order explicitly.
  Prior work                  Type    P     R     F1
  Zhang (2006), tree only     K,yes   74.1  62.4  67.7
  Zhang (2006), linear        K,yes   73.5  67.0  70.1
  Zhang (2006), poly          K,yes   76.1  68.4  72.1
  Sun & Grishman (2011)       F,yes   73.4  67.7  70.4
  Jiang & Zhai (2007)         F,no    73.4  70.2  71.3
  Our re-implementation       Type    P     R     F1
  Tree only (PET Zhang)       K,yes   70.7  62.5  66.3
  Linear composite            K,yes   71.3  66.6  68.9
  Polynomial composite        K,yes   72.6  67.7  70.1

Table 3 shows that our system (bottom) aligns well with the state of the art. Our best system (composite kernel with polynomial expansion) reaches an F1 of 70.1, which aligns well with the 70.4 of Sun et al. (2011), who use the same data split. This is slightly behind that of Zhang (2006); the reason might be threefold: i) different data partitioning; ii) different pre-processing; iii) they incorporate features from additional sources, i.e. a phrase chunker, dependency parser and semantic resources (Zhou et al., 2005) (we have on average 9 features/instance, they use 40). Since we focus on evaluating the impact of semantic similarity in tree kernels, we think our system is very competitive. Removing gold entity and mention information results in a significant F1 drop from 66.3% to 54.2%. However, in a realistic setting we do not have gold entity information available, especially not when we apply the system to any kind of text. Thus, in the domain adaptation setup we assume entity boundaries given but not their label. Clearly, evaluating the approach on predicted mentions, e.g. Giuliano et al. (2007), is another important dimension; it is, however, out of the scope of the current paper.

11 Clusters are available at http://disi.unitn.it/ikernels/RelationExtraction
12 Other weightings/normalizations (like LDA) didn't improve the results; best was to take the posteriors and add c.
13 http://cs.nyu.edu/~asun/pub/ACL11_CVFileList.txt

6.2 Tree Kernels with Brown Word Clusters

To evaluate the effectiveness of Brown word clusters in tree kernels, we evaluated different instance representations (cf. Figure 2) on the ACE 2005 development set. Table 4 shows the results.

Table 4: Brown clusters in tree kernels (cf. Figure 2), on bc-dev.
  representation               P     R     F1
  baseline                     52.2  41.7  46.4
  replace word                 49.7  38.6  43.4
  replace pos                  56.3  41.9  48.0
  replace pos only mentions    55.3  41.6  47.5
  above word                   54.5  42.2  47.6
  above pos                    55.8  41.1  47.3

To summarize, we found: i) it is generally a bad idea to dismiss lexical information completely, i.e.
replacing or ignoring terminals harms performance; ii) the best way to incorporate Brown clusters is to replace the Pos tag with the cluster bitstring; iii) marking all words is generally better than only mentions; this is in contrast to Sun et al. (2011) who found that in their feature-based system it was better to add cluster information to entity mentions only. As we will discuss, the combination of syntax and semantics exploited in this novel kernel avoids the necessity of restricting cluster information to mentions only. 6.3 Semantic Tree Kernels for DA To evaluate the effectiveness of the proposed kernels across domains, we use the ACE 2005 data as testbed. Following standard practices on ACE 2004, the newswire (nw) and broadcast news (bn) data from ACE 2005 are considered training data (labeled source domain). The test data consists of three targets: broadcast conversation, telephone conversation, weblogs. As we want to build a single system that is able to handle heterogeneous data, we do not assume that there is further unlabeled domain-specific data, but we assume to have a large unlabeled corpus (ukWaC) at our disposal to improve the generalizability of our models. Table 5 presents the results. In the first three rows we see the performance of the baseline models (PET, BOW and BOW without marking). In-domain (col 1): when evaluated on the same domain the system was trained on (nw+bn, 5-fold cross-validation). Out-of-domain performance (cols 2-4): the system evaluated on the targets, namely broadcast conversation (bc), telephone conversation (cts) and weblogs (wl). While the system achieves a performance of 46.0 F1 within its own domain, the performance drops to 45.3, 43.4 and 34.0 F1 on the target domains, respectively. The BOW kernel that disregards syntax is often less effective (row 2). We see also the effect of target entity marking: the BOW kernel without entity marking performs substantially worse (row 3). For the remaining experiments we use the BOW kernel with entity marking. Rows 4 and 5 of Table 5 show the effect of using instance weighting for the PET baseline. Two models are shown: they differ in whether PET or BOW was used as instance representation for training the discriminative classifier. Instance weighting shows mixed results: it helps slightly on the weblogs domain, but does not help on broadcast conversation and telephone conversations. Interestingly, the two models used to obtain the weights perform similarly, despite the fact that their performance differs (F1: 70.5 BOW, 73.5 PET); it turns out that the correlation between the weights is high (+0.82). The next part (rows 6-9) shows the effect of enriching the syntactic structures with either Brown word clusters or LSA. The Brown cluster kernel applied to PET (P WC) improves performance over the baseline over all target domains. The same holds also for the lexical semantic kernel based on LSA (P LSA), however, to only two out of three domains. This suggests that the two kernels capture different information and a combined kernel might be effective. More importantly, the table shows the effect of adding Brown clusters or LSA semantics to the BOW kernel: it can actually hurt performance, sometimes to a small but other times to a considerably degree. For instance, WC applied to PET achieves an F1 of 47.0 (baseline: 45.3) on the bc domain, while applied to BOW it hurts performance significantly, i.e. it drops from 1504 nw+bn (in-dom.) 
bc cts wl Baseline: P: R: F1: P: R: F1: P: R: F1: P: R: F1: PET 50.6 42.1 46.0 51.2 40.6 45.3 51.0 37.8 43.4 35.4 32.8 34.0 BOW 55.1 37.3 44.5 57.2 37.1 45.0 57.5 31.8 41.0 41.1 27.2 32.7 BOW no marking 49.6 34.6 40.7 51.5 34.7 41.4 54.6 30.7 39.3 37.6 25.7 30.6 PET adapted: P: R: F: P: R: F: P: R: F: P: R: F: IW1 (using PET) 51.4 44.1 47.4 49.1 41.1 44.7 50.8 37.5 43.1 35.5 33.9 34.7 IW2 (using BOW) 51.2 43.6 47.1 49.1 41.3 44.9 51.2 37.8 43.5 35.6 33.8 34.7 With Similarity: P: R: F1: P: R: F1: P: R: F1: P: R: F1: P WC 55.4 44.6 49.4 54.3 41.4 47.0 55.9 37.1 44.6 40.0 32.7 36.0 B WC 47.9 36.4 41.4 49.5 35.2 41.2 53.3 33.2 40.9 31.7 24.1 27.4 P LSA 52.3 44.1 47.9 51.4 41.7 46.0 49.7 36.5 42.1 38.1 36.5 37.3 B LSA 53.7 37.8 44.4 55.1 33.8 41.9 54.9 32.3 40.7 39.2 28.6 33.0 P+P WC 55.0 46.5 50.4 54.4 43.4 48.3 54.1 38.1 44.7 38.4 34.5 36.3 P+P LSA 52.7 46.6 49.5 53.9 45.2 49.2 49.9 37.6 42.9 37.9 38.3 38.1 P+P WC+P LSA 55.1 45.9 50.1 55.3 43.1 48.5† 53.1 37.0 43.6 39.9 35.8 37.8† Table 5: In-domain (first column) and out-of-domain performance (columns two to four) on ACE 2005. PET and BOW are abbreviated by P and B, respectively. If not specified BOW is marked. 45.0 to 41.2. This is also the case for LSA applied to the BOW kernel, which drops to 41.9. On the cts domain this is less pronounced. Only on the weblogs domain B LSA achieves a minor improvement (from 32.7 to 33.0). In general, distributional semantics constrained by syntax (i.e. combined with PET) can be effectively exploited, while if applied ‘blindly’ – without the guide of syntax (i.e. BOW) – performance might drop, often considerably. We believe that the semantic information does not help the BOW kernel as there is no syntactic information that constrains the application of the noisy source, as opposed to the case with the PET kernel. As the two semantically enriched kernels, PET LSA and PET WC, seem to capture different information we use composite kernels (rows 1011): the baseline kernel (PET) summed with the lexical semantic kernels. As we can see, results improve further: for instance on the bc test set, PET WC reaches an F1 of 47.0, while combined with PET (PET+PET WC) this improves to 48.3. Adding also PET LSA results in the best performance and our final system (last row): the composite kernel (PET+PET WC+PET LSA) reaches an F1 of 48.5, 43.6 and 37.8 on the target domains, respectively, i.e. with an absolute improvement of: +3.2%, +0.2% and +3.8%, respectively. Two out of three improvements are significant at p < 0.05 (indicated by † in Table 5). Moreover, the system also improved in its own domain (first column), therefore having achieved robustness. By performing an error analysis we found that, for instance, the Brown clusters help to generalize locations and professions. For example, the baseline incorrectly considered ‘Dutch filmmaker’ in a PART-WHOLE relation, while our system correctly predicted GEN-AFF(filmmaker,Dutch). ‘Filmmaker’ does not appear in the source, however ‘Dutch citizen’ does. Both ‘citizen’ and ‘filmmaker’ appear in the same cluster, thereby helping the system to recover the correct relation. bc cts wl Relation: BL SYS BL SYS BL SYS PART-WHOLE 37.8 43.1 59.3 52.3 30.5 36.3 ORG-AFF 60.7 62.9 35.5 42.3 41.0 42.0 PHYS 35.3 37.6 25.4 28.7 25.2 26.9 ART 20.8 37.9 34.5 43.5 26.5 40.3 GEN-AFF 30.1 33.0 16.8 18.6 21.6 28.1 PER-SOC 74.1 74.2 66.3 63.1 42.6 48.0 µ average 45.3 48.5 43.4 43.6 34.0 37.8 Table 6: F1 per coarse relation type (ACE 2005). SYS is the final model, i.e. 
last row (PET+PET WC+PET LSA) of Table 5. Furthermore, Table 6 provides the performance breakdown per relation for the baseline (BL) and our best system (SYS). The table shows that our system is able to improve F1 on all relations for the broadcast and weblogs data. On most relations, this is also the case for the telephone (cts) data, although the overall improvement is not significant. Most errors were made on the PER-SOC 1505 relation, which constitutes the largest portion of cts (cf. Figure 3). As shown in the same figure, the relation distribution of the cts domain is also rather different from the source. This conversation data is a very hard domain, with a lot of disfluencies and spoken language patterns. We believe it is more distant from the other domains, especially from the unlabeled collection, thus other approaches might be more appropriate, e.g. domain identification (Dredze et al., 2010). 7 Conclusions and Future Work We proposed syntactic tree kernels enriched by lexical semantic similarity to tackle the portability of a relation extractor to different domains. The results of diverse kernels exploiting (i) Brown clustering and (ii) LSA show that a suitable combination of syntax and lexical generalization is very promising for domain adaptation. The proposed system is able to improve performance significantly on two out of three target domains (up to 8% relative improvement). We compared it to instance weighting, which gave only modest or no improvements. Brown clusters remained unexplored for kernel-based approaches. We saw that adding cluster information blindly might actually hurt performance. In contrast, adding lexical information combined with syntax can help to improve performance: the syntactic structure enriched with lexical information provides a feature space where syntax constrains lexical similarity obtained from unlabeled data. Thus, semantic syntactic tree kernels appear to be a suitable mechanism to adequately trade off the two kinds of information. In future we plan to extend the evaluation to predicted mentions, which necessarily includes a careful evaluation of pre-processing components, as well as evaluating the approach on other semantic tasks. Acknowledgments We would like to thank Min Zhang for discussions on his prior work as well as the anonymous reviewers for their valuable feedback. The research described in this paper has been supported by the European Community’s Seventh Framework Programme (FP7/2007-2013) under the grant #288024: LIMOSINE – Linguistically Motivated Semantic aggregation engiNes. References Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky Wide Web: A Collection of Very Large Linguistically Processed Web-Crawled Corpora. Language Resources and Evaluation, pages 209–226. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain Adaptation with Structural Correspondence Learning. In Conference on Empirical Methods in Natural Language Processing, Sydney, Australia. Stephan Bloehdorn and Alessandro Moschitti. 2007a. Combined syntactic and semantic kernels for text classification. In ECIR, pages 307–318. Stephan Bloehdorn and Alessandro Moschitti. 2007b. Exploiting Structure and Semantics for Expressive Text Kernels. In Conference on Information Knowledge and Management, Lisbon, Portugal. Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-Based n-gram Models of Natural Language. Computational Linguistics, 18:467–479. Razvan C. Bunescu. 
2007. Learning to extract relations from the web using minimal supervision. In Proceedings of ACL. Yee Seng Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 152–160, Beijing, China, August. Coling 2010 Organizing Committee. Michael Collins and Nigel Duffy. 2001. Convolution Kernels for Natural Language. In Proceedings of Neural Information Processing Systems (NIPS 2001). Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Semantic convolution kernels over dependency trees: smoothed partial tree kernel. In CIKM, pages 2013–2016. Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of the 42nd Annual Meeting on ACL, Barcelona, Spain. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of ACL, pages 256–263, Prague, Czech Republic, June. Mark Dredze, Tim Oates, and Christine Piatko. 2010. We’re not in kansas anymore: Detecting domain changes in streams. In Proceedings of EMNLP, pages 585–595, Cambridge, MA. Claudio Giuliano, Alberto Lavelli, and Lorenza Romano. 2007. Relation extraction and the influence of automatic named-entity recognition. ACM Trans. Speech Lang. Process., 5(1):2:1–2:26, December. 1506 G. Golub and W. Kahan. 1965. Calculating the singular values and pseudo-inverse of a matrix. Journal of the Society for Industrial and Applied Mathematics: Series B, Numerical Analysis, 2(2):pp. 205–224. Zellig Harris. 1964. Distributional structure. In Jerrold J. Katz and Jerry A. Fodor, editors, The Philosophy of Linguistics. Oxford University Press. Jiayuan Huang, Arthur Gretton, Bernhard Sch¨olkopf, Alexander J. Smola, and Karsten M. Borgwardt. 2007. Correcting sample selection bias by unlabeled data. In In NIPS. MIT Press. Jing Jiang and Chengxiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In In ACL 2007, pages 264–271. Jing Jiang. 2009. Multi-task transfer learning for weakly-supervised relation extraction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP, pages 1012–1020, Suntec, Singapore. Percy Liang. 2005. Semi-Supervised Learning for Natural Language. Master’s thesis, Massachusetts Institute of Technology. Hsuan-Tien Lin, Chih-Jen Lin, and Ruby C. Weng. 2007. A note on platt’s probabilistic outputs for support vector machines. Mach. Learn., 68(3):267– 276. Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, pages 419–444. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of ACL-IJCNLP, pages 1003–1011, Suntec, Singapore, August. Alessandro Moschitti. 2004. A study on convolution kernels for shallow semantic parsing. In Proceedings of the 42nd Meeting of the ACL, Barcelona, Spain. Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In Proceedings of the 17th ECML, Berlin, Germany. Alessandro Moschitti. 2008. Kernel methods, syntax and semantics for relational text categorization. In CIKM, pages 253–262. Truc-Vien T. Nguyen, Alessandro Moschitti, and Giuseppe Riccardi. 2009. Convolution kernels on constituent, dependency and sequential structures for relation extraction. 
In Proceedings of EMNLP ’09, pages 1378–1387, Stroudsburg, PA, USA. J. Nivre, J. Hall, S. K¨ubler, R. McDonald, J. Nilsson, S. Riedel, and D. Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLPCoNLL, pages 915–932. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. WileyInterscience. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL). John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press. Anders Søgaard and Martin Haulrich. 2011. Sentence-level instance-weighting for graph-based and transition-based dependency parsing. In Proceedings of the 12th International Conference on Parsing Technologies, IWPT ’11, pages 43–47, Stroudsburg, PA, USA. Anders Søgaard and Anders Johannsen. 2012. Robust learning in random subspaces: equipping NLP for OOV effects. In Proceedings of Coling. Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In Proceedings of ACL-HLT, pages 521–529, Portland, Oregon, USA. Mengqiu Wang. 2008. A re-examination of dependency path kernels for relation extraction. In Proceedings of the 3rd International Joint Conference on Natural Language Processing-IJCNLP. Christian Widmer. 2008. Domain adaptation in sequence analysis. Diplomarbeit, University of T¨ubingen. Feiyu Xu, Hans Uszkoreit, Hond Li, and Niko Felger. 2008. Adaptation of relation extraction rules to new domains. In Proceedings of LREC’08, Marrakech, Morocco. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction. In Proceedings of EMNLP-ACL, pages 181–201. Min Zhang, Jian Su, Danmei Wang, Guodong Zhou, and Chew Lim Tan. 2005. Discovering relations between named entities from a large raw corpus using tree similarity-based clustering. In Proceedings of IJCNLP’2005, pages 378–389, Jeju Island, South Korea. Min Zhang, Jie Zhang, Jian Su, and Guodong Zhou. 2006. A composite kernel to extract relations between entities with both flat and structured features. In Proceedings of COLING-ACL 2006, pages 825– 832. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of ACL), pages 427–434, Ann Arbor, Michigan. 1507
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1508–1516, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A joint model of word segmentation and phonological variation for English word-final /t/-deletion Benjamin B¨orschinger1,3 and Mark Johnson1 and Katherine Demuth2 (1) Department of Computing, Macquarie University (2) Department of Linguistics, Macquarie University (3) Department of Computational Linguistics, Heidelberg University {benjamin.borschinger, mark.johnson, katherine.demuth}@mq.edu.au Abstract Word-final /t/-deletion refers to a common phenomenon in spoken English where words such as /wEst/ “west” are pronounced as [wEs] “wes” in certain contexts. Phonological variation like this is common in naturally occurring speech. Current computational models of unsupervised word segmentation usually assume idealized input that is devoid of these kinds of variation. We extend a non-parametric model of word segmentation by adding phonological rules that map from underlying forms to surface forms to produce a mathematically well-defined joint model as a first step towards handling variation and segmentation in a single model. We analyse how our model handles /t/-deletion on a large corpus of transcribed speech, and show that the joint model can perform word segmentation and recover underlying /t/s. We find that Bigram dependencies are important for performing well on real data and for learning appropriate deletion probabilities for different contexts.1 1 Introduction Computational models of word segmentation try to solve one of the first problems language learners have to face: breaking an unsegmented stream of sound segments into individual words. Currently, most such models assume that the input consists of sequences of phonemes with no pronunciation variation across different occurrences of the same word type. In this paper we describe 1The implementation of our model as well as scripts to prepare the data will be made available at http://web.science.mq.edu.au/~bborschi. We can’t release our version of the Buckeye Corpus (Pitt et al., 2007) directly because of licensing issues. an extension of the Bayesian models of Goldwater et al. (2009) that incorporates phonological rules to “explain away” surface variation. As a concrete example, we focus on word-final /t/deletion in English, although our approach is not limited to this case. We choose /t/-deletion because it is a very common and well-studied phenomenon (see Coetzee (2004, Chapter 5) for a review) and segmental deletion is an interesting test-case for our architecture. Recent work has found that /t/-deletion (among other things) is indeed common in child-directed speech (CDS) and, importantly, that its distribution is similar to that in adult-directed speech (ADS) (Dilley et al., to appear). This justifies our using ADS to evaluate our model, as discussed below. Our experiments are consistent with longstanding and recent findings in linguistics, in particular that /t/-deletion heavily depends on the immediate context and that models ignoring context work poorly on real data. We also examine how well our models identify the probability of /t/deletion in different contexts. We find that models that capture bigram dependencies between underlying forms provide considerably more accurate estimates of those probabilities than corresponding unigram or “bag of words” models of underlying forms. 
In section 2 we discuss related work on handling variation in computational models and on /t/deletion. Section 3 describes our computational model and section 4 discusses its performance for recovering deleted /t/s. We look at both a situation where word boundaries are pre-specified and only inference for underlying forms has to be performed; and the problem of jointly finding the word boundaries and recovering deleted underlying /t/s. Section 5 discusses our findings, and section 6 concludes with directions for further research. 1508 2 Background and related work The work of Elsner et al. (2012) is most closely related to our goal of building a model that handles variation. They propose a pipe-line architecture involving two separate generative models, one for word-segmentation and one for phonological variation. They model the mapping to surface forms using a probabilistic finite-state transducer. This allows their architecture to handle virtually arbitrary pronunciation variation. However, as they point out, combining the segmentation and the variation model into one joint model is not straight-forward and usual inference procedures are infeasible, which requires the use of several heuristics. We pursue an alternative research strategy here, starting with a single well-studied example of phonological variation. This permits us to develop a joint generative model for both word segmentation and variation which we plan to extend to handle more phenomena in future work. An earlier work that is close to the spirit of our approach is Naradowsky and Goldwater (2009), who learn spelling rules jointly with a simple stem-suffix model of English verb morphology. Their model, however, doesn’t naturally extend to the segmentation of entire utterances. /t/-deletion has received a lot of attention within linguistics, and we point the interested reader to Coetzee (2004, Chapter 5) for a thorough review. Briefly, the phenomenon is as follows: word-final instances of /t/ may undergo deletion in natural speech, such that /wEst/ “west” is actually pronounced as [wEs] “wes”.2 While the frequency of this phenomenon varies across social and dialectal groups, within groups it has been found to be robust, and the probability of deletion depends on its phonological context: a /t/ is more likely to be dropped when followed by a consonant than a vowel or a pause, and it is more likely to be dropped when following a consonant than a vowel as well. We point out two recent publications that are of direct relevance to our research. Dilley et al. (to appear) study word-final variation in stop consonants in CDS, the kind of input we ideally would like to evaluate our models on. They find that “infants largely experience statistical distributions of non-canonical consonantal pronunciation variants [including deletion] that mirror those experienced by adults.” This both directly establishes the need 2Following the convention in phonology, we give underlying forms within “/.../” and surface forms within “[. . . ]”. for computational models to handle this dimension of variation, and justifies our choice of using ADS for evaluation, as mentioned above. Coetzee and Kawahara (2013) provide a computational study of (among other things) /t/deletion within the framework of Harmonic Grammar. They do not aim for a joint model that also handles word segmentation, however, and rather than training their model on an actual corpus, they evaluate on constructed lists of examples, mimicking frequencies of real data. 
Overall, our findings agree with theirs, in particular that capturing the probability of deletion in different contexts does not automatically result in good performance for recovering individual deleted /t/s. We will come back to this point in our discussion at the end of the paper.

3 The computational model

Our models build on the Unigram and the Bigram model introduced in Goldwater et al. (2009). Figure 1 shows the graphical model for our joint Bigram model (the Unigram case is trivially recovered by generating the Ui,j directly from L rather than from LUi,j−1). Figure 2 gives the mathematical description of the graphical model and Table 1 provides a key to the variables of our model.

[Figure 1: The graphical model for our joint model of word-final /t/-deletion and Bigram word segmentation. The corresponding mathematical description is given in Figure 2. The generative process mimics the intuitively plausible idea of generating underlying forms from some kind of syntactic model (here, a Bigram language model) and then mapping the underlying form to an observed surface form through the application of a phonological rule component, here represented by the collection of rule probabilities ρc.]

$$
\begin{aligned}
L \mid \gamma, \alpha_0 &\sim \mathrm{DP}(\alpha_0, B(\cdot \mid \gamma)) \\
L_w \mid L, \alpha_1 &\sim \mathrm{DP}(\alpha_1, L) \\
\rho_c \mid \beta &\sim \mathrm{Beta}(1, 1) \\
U_{i,0} &= \$, \qquad S_{i,0} = \$ \\
U_{i,j+1} \mid U_{i,j}, L_{U_{i,j}} &\sim L_{U_{i,j}} \\
S_{i,j} \mid U_{i,j}, U_{i,j+1}, \rho &\sim \mathrm{PR}(\cdot \mid U_{i,j}, U_{i,j+1}) \\
W_i \mid S_{i,1}, \dots, S_{i,n_i} &= \mathrm{CAT}(S_{i,0}, \dots, S_{i,n_i})
\end{aligned}
$$

Figure 2: Mathematical description of our joint Bigram model. The lexical generator B(· | γ) is specified in Figure 3 and PR is explained in the text below. CAT stands for concatenation without word boundaries; ni refers to the number of words in utterance i.

Table 1: Key for the variables in Figure 1 and Figure 2. See Figure 3 for the definition of B.
  Variable   Explanation
  B          base distribution over possible words
  L          back-off distribution over words
  Lw         distribution over words following w
  Ui,j       underlying form, a word
  Si,j       surface realization of Ui,j, a word
  ρc         /t/-deletion probability in context c
  Wi         observed segments for the ith utterance

3 Each utterance terminates as soon as a $ is generated, thus determining the number of words ni in the ith utterance. See Goldwater et al. (2009) for discussion.

The model generates a latent sequence of underlying word tokens U1, . . . , Un. Each word token is itself a non-empty sequence of segments or phonemes, and each Uj corresponds to an underlying word form, prior to the application of any phonological rule. This generative process is repeated for each utterance i, leading to multiple utterances of the form Ui,1, . . . , Ui,ni, where ni is the number of words in the ith utterance and Ui,j is the jth word in the ith utterance. Each utterance is padded by an observed utterance boundary symbol $ to the left and to the right, hence Ui,0 = Ui,ni+1 = $.3 Each Ui,j+1 is generated conditionally on its predecessor Ui,j from LUi,j, as shown in the first row of the lower plate in Figure 1. Each Lw is a distribution over the possible words that can follow a token of w, and L is a global distribution over possible words, used as back-off for all Lw. Just as in Goldwater et al. (2009), L is drawn from a Dirichlet Process (DP) with base distribution B and concentration parameter α0, and the word-type-specific distributions Lw are drawn from a DP(L, α1), resulting in a hierarchical DP model (Teh et al., 2006). The base distribution B functions as a lexical generator, defining a prior distribution over possible words.
In principle, B can incorporate arbitrary prior knowledge about possible words, for example syllable structure (cf. Johnson (2008)). Inspired by Norris et al. (1997), we use a simpler possible-word constraint that only rules out sequences that lack a vowel (see Figure 3). While this is clearly a simplification, it is a plausible assumption for English data.

$$\gamma \sim \mathrm{Dir}(\langle 0.01, \dots, 0.01 \rangle)$$
$$B(w = x_{1:n} \mid \gamma) = \begin{cases} \dfrac{\left[\prod_{i=1}^{n} \gamma_{x_i}\right]\gamma_{\#}}{Z} & \text{if } V(w) \\ 0 & \text{if } \neg V(w) \end{cases}$$

Figure 3: Lexical generator with possible-word constraint for words in Σ+, Σ being the alphabet of available phonemes. x1:n is a sequence of elements of Σ of length n. γ is a probability vector of length |Σ| + 1 drawn from a sparse Dirichlet prior, giving the probability for each phoneme and the special word-boundary symbol #. The predicate V holds of all sequences containing at least one vowel. Z is a normalization constant that adjusts for the mass assigned to the empty and non-possible words.

Instead of generating the observed sequence of segments W directly by concatenating the underlying forms as in Goldwater et al. (2009), we map each Ui,j to a corresponding surface form Si,j by a probabilistic rule component PR. The values over which the Si,j range are determined by the available phonological processes. In the model we study here, the phonological processes only include a rule for deleting word-final /t/s, but in principle PR can be used to encode a wide variety of phonological rules. Here, Si,j ∈ {Ui,j, DELF(Ui,j)} if Ui,j ends in a /t/, and Si,j = Ui,j otherwise, where DELF(u) refers to the same word as u except that it lacks u's final segment. We look at three kinds of contexts on which a rule's probability of applying depends:
1. a uniform context that applies to every word-final position;
2. a right context that also considers the following segment;
3. a left-right context that additionally takes the preceding segment into account.
For each possible context c there is a probability ρc which stands for the probability of the rule applying in this context. Writing contexts in the notation familiar from generative phonology (Chomsky and Halle, 1968), our model can be seen as implementing the following rules under the different assumptions:4
  uniform:     /t/ → ∅ / __ ]word
  right:       /t/ → ∅ / __ ]word β
  left-right:  /t/ → ∅ / α __ ]word β
We let β range over V(owel), C(onsonant) and $ (utterance-boundary), and α over V and C. We define a function CONT that maps a pair of adjacent underlying forms Ui,j, Ui,j+1 to the context of the final segment of Ui,j. For example, CONT(/wEst/, /@v/) returns "C ]word V" in the left-right setting, or simply "]word" in the uniform setting. CONT returns a special NOT context if Ui,j doesn't end in a /t/. We stipulate that ρNOT = 0.0. Then we can define PR as follows:
$$\mathrm{PR}(\mathrm{DELF}(u) \mid u, r) = \rho_{\mathrm{CONT}(u,r)}, \qquad \mathrm{PR}(u \mid u, r) = 1 - \rho_{\mathrm{CONT}(u,r)}$$
Depending on the context setting used, our model includes one (uniform), three (right) or six (left-right) /t/-deletion probabilities ρc. We place a uniform Beta prior on each of those so as to learn their values in the LEARN-ρ experiments below. Finally, the observed unsegmented utterances Wi are generated by concatenating all Si,j using the function CAT.

4 For right there are three and for left-right six different rules, one for every instantiation of the context template.
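As a minimal illustration of the rule component just defined (not the actual sampler code), the following sketch implements CONT and PR for the left-right setting; the vowel inventory and the treatment of one-segment words are simplifying assumptions.

```python
# Hedged sketch of the context function CONT and rule component PR
# for the left-right setting. Words are strings of phoneme symbols;
# '$' marks the utterance boundary. The vowel inventory is a toy assumption.

VOWELS = set("aeiouAEIOU@")

def seg_class(segment):
    """Classify a segment as V(owel), C(onsonant) or $ (utterance boundary)."""
    if segment == "$":
        return "$"
    return "V" if segment in VOWELS else "C"

def cont(u, r):
    """Left-right context of the final segment of underlying form u,
    where r is the following underlying form (or '$')."""
    if not u.endswith("t"):
        return "NOT"
    left = seg_class(u[-2]) if len(u) > 1 else "V"   # assumption for 1-segment words
    right = seg_class(r[0]) if r else "$"
    return f"{left} ]word {right}"

def pr(surface, u, r, rho):
    """Probability of surface realization `surface` for underlying form u
    followed by r, given per-context deletion probabilities `rho`."""
    c = cont(u, r)
    p_delete = rho.get(c, 0.0)           # rho[NOT] is implicitly 0.0
    if u.endswith("t") and surface == u[:-1]:
        return p_delete
    if surface == u:
        return 1.0 - p_delete
    return 0.0

# Example: probability that underlying /wEst/ surfaces as [wEs] before /@v/,
# using illustrative values in the ballpark of the empirical rates in Table 3.
rho = {"C ]word V": 0.42, "C ]word C": 0.62}
print(pr("wEs", "wEst", "@v", rho))      # -> 0.42
```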
We briefly comment on the central intuition of this model, i.e. why it can infer underlying forms from surface forms. Bayesian word segmentation models try to compactly represent the observed data in terms of a small set of units (word types) and a short analysis (a small number of word tokens). Phonological rules such as /t/-deletion can "explain away" an observed surface type such as [wEs] in terms of the underlying type /wEst/, which is independently needed for surface tokens of [wEst]. Thus, the /t/ → ∅ rule makes possible a smaller lexicon for a given number of surface tokens. Obviously, human learners have access to additional cues, such as the meaning of words, knowledge of phonological similarity between segments and so forth. One of the advantages of an explicitly defined generative model such as ours is that it is straightforward to gradually extend it by adding more cues, as we point out in the discussion.

3.1 Inference

Just as for the Goldwater et al. (2009) segmentation models, exact inference is infeasible for our joint model. We extend the collapsed Gibbs breakpoint-sampler described in Goldwater et al. (2009) to perform inference under our extended models. We refer the reader to their paper for additional details such as how to calculate the Bigram probabilities in Figure 4. Here we focus on the required changes to the sampler so as to perform inference under our richer model. We consider the case of a single surface string W, so we drop the i-index in the following discussion. Knowing W, the problem is to recover the underlying forms U1, . . . , Un and the surface forms S1, . . . , Sn for unknown n. A major insight in Goldwater's work is that rather than sampling over the latent variables in the model directly (the number of which we don't even know), we can instead perform Gibbs sampling over a set of boundary variables b1, . . . , b|W|−1 that jointly determine the values for our variables of interest, where |W| is the length of the surface string W. For our model, each bj ∈ {0, 1, t}, where bj = 0 indicates absence of a word boundary, bj = 1 indicates presence of a boundary, and bj = t indicates presence of a boundary with a preceding underlying /t/. The relation between the bj and the S1, . . . , Sn and U1, . . . , Un is illustrated in Figure 5. The required sampling equations are given in Figure 4.

$$
\begin{aligned}
P(b_j = 0 \mid b_{-j}) \propto{}& P(w_{12,u} \mid w_{l,u}, b_{-j}) \times \mathrm{PR}(w_{12,s} \mid w_{12,u}, w_{r,u}) \\
&\times P(w_{r,u} \mid w_{12,u}, b_{-j} \oplus \langle w_{l,u}, w_{12,u} \rangle) \qquad (1)\\
P(b_j = t \mid b_{-j}) \propto{}& P(w_{1,t} \mid w_{l,u}, b_{-j}) \times \mathrm{PR}(w_{1,s} \mid w_{1,t}, w_{2,u}) \times P(w_{2,u} \mid w_{1,t}, b_{-j} \oplus \langle w_{l,u}, w_{1,t} \rangle) \\
&\times \mathrm{PR}(w_{2,s} \mid w_{2,u}, w_{r,u}) \times P(w_{r,u} \mid w_{2,u}, b_{-j} \oplus \langle w_{l,u}, w_{1,t} \rangle \oplus \langle w_{1,t}, w_{2,u} \rangle) \qquad (2)\\
P(b_j = 1 \mid b_{-j}) \propto{}& P(w_{1,s} \mid w_{l,u}, b_{-j}) \times \mathrm{PR}(w_{1,s} \mid w_{1,s}, w_{2,u}) \times P(w_{2,u} \mid w_{1,s}, b_{-j} \oplus \langle w_{l,u}, w_{1,s} \rangle) \\
&\times \mathrm{PR}(w_{2,s} \mid w_{2,u}, w_{r,u}) \times P(w_{r,u} \mid w_{2,u}, b_{-j} \oplus \langle w_{l,u}, w_{1,s} \rangle \oplus \langle w_{1,s}, w_{2,u} \rangle) \qquad (3)
\end{aligned}
$$

Figure 4: Sampling equations for our Gibbs sampler; see Figure 5 for illustration. bj = 0 corresponds to no boundary at this position, bj = t to a boundary with a preceding underlying /t/, and bj = 1 to a boundary with no additional underlying /t/. We use b−j for the statistics determined by all but the jth position and b−j ⊕ ⟨r, l⟩ for these statistics plus an additional count of the bigram ⟨r, l⟩. P(w | l, b) refers to the bigram probability of ⟨l, w⟩ given the statistics b; we refer the reader to Goldwater et al. (2009) for the details of calculating these bigram probabilities and details about the required statistics for the collapsed sampler. PR is defined in the text.
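To illustrate how the three hypotheses compete during sampling, here is a hedged sketch of the scoring step for a single boundary position. The helpers `bigram_prob` (standing in for the collapsed bigram probability P(w | prev, b−j)) and `pr` (the rule component sketched above) are assumptions, not the actual implementation, and for readability the sketch assumes that the words to the right of position j carry no deleted /t/ of their own (i.e. w12,u = w12,s and w2,u = w2,s, as in the Figure 5 example).

```python
# Hedged sketch of scoring the three values of a single boundary variable b_j,
# following the equations in Figure 4. `stats` is a stand-in for b^{-j}; adding
# a pair to the list plays the role of the ⊕ operation on bigram counts.

def score_boundary(w_l, w1_s, w2_s, w_r, bigram_prob, stats, pr, rho):
    """Return unnormalized probabilities for b_j in {0, 1, 't'}."""
    w12_s = w1_s + w2_s          # single word resulting from no boundary
    w1_t = w1_s + "t"            # hypothesis: w1 ends in a deleted underlying /t/

    scores = {}
    # b_j = 0: one word w12 (underlying form assumed equal to the surface form).
    scores[0] = (bigram_prob(w12_s, w_l, stats)
                 * pr(w12_s, w12_s, w_r, rho)
                 * bigram_prob(w_r, w12_s, stats + [(w_l, w12_s)]))
    # b_j = 1: boundary, first word has no extra underlying /t/.
    scores[1] = (bigram_prob(w1_s, w_l, stats)
                 * pr(w1_s, w1_s, w2_s, rho)
                 * bigram_prob(w2_s, w1_s, stats + [(w_l, w1_s)])
                 * pr(w2_s, w2_s, w_r, rho)
                 * bigram_prob(w_r, w2_s, stats + [(w_l, w1_s), (w1_s, w2_s)]))
    # b_j = 't': boundary, first word ends in a deleted underlying /t/.
    scores["t"] = (bigram_prob(w1_t, w_l, stats)
                   * pr(w1_s, w1_t, w2_s, rho)
                   * bigram_prob(w2_s, w1_t, stats + [(w_l, w1_t)])
                   * pr(w2_s, w2_s, w_r, rho)
                   * bigram_prob(w_r, w2_s, stats + [(w_l, w1_t), (w1_t, w2_s)]))
    return scores
```

Normalizing these three scores and sampling a value for b_j completes one step of the Gibbs sweep.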
[Figure 5: The relation between the observed sequence of segments "Ihiit$" (bottom), the boundary variables b1, . . . , b|W|−1 the Gibbs sampler operates over, the latent sequence of surface forms "I hi it $" and the latent sequence of underlying forms "I hit it $". When sampling a new value for b3 = t, the different word variables in Figure 4 are: w12,u = w12,s = hiit, w1,t = hit and w1,s = hi, w2,u = w2,s = it, wl,u = I, wr,u = $. Note that we need a boundary variable at the end of the utterance as there might be an underlying /t/ at this position as well. The final boundary variable is set to 1, not t, because the /t/ in "it" is observed.]

4 Experiments

4.1 The data

We are interested in how well our model handles /t/-deletion in real data. Ideally, we'd evaluate it on CDS, but as of now we know of no available large enough corpus of accurately hand-transcribed CDS. Instead, we used the Buckeye Corpus (Pitt et al., 2007) for our experiments, a large ADS corpus of interviews with English speakers that have been transcribed with relatively fine phonetic detail, with /t/-deletion among the things manually annotated. Pointing to the recent work by Dilley et al. (to appear), we want to emphasize that the statistical distribution of /t/-deletion has been found to be similar for ADS and CDS, at least for read speech. We automatically derived a corpus of 285,792 word tokens across 48,795 utterances from the Buckeye Corpus by collecting utterances across all interviews and heuristically splitting utterances at speaker-turn changes and indicated silences. The Buckeye corpus lists for each word token a manually transcribed pronunciation in context as well as its canonical pronunciation as given in a pronouncing dictionary. As input to our model, we use the canonical pronunciation unless the pronunciation in context indicates that the final /t/ has been deleted, in which case we also delete the final /t/ of the canonical pronunciation. Figure 6 shows an example from the Buckeye Corpus, indicating what the original data, a fully idealized version and our derived input that takes /t/-deletions into account look like.

Figure 6: An example fragment from the Buckeye corpus in orthographic form, the fine transcript available in the Buckeye corpus, a fully idealized pronunciation with canonical dictionary pronunciations, and our version of the data with dropped /t/s.
  orthographic   I don't intend to
  transcript     /aI R oU n I n t E n d @/
  idealized      /aI d oU n t I n t E n d t U/
  t-drop         /aI d oU n I n t E n d t U/

Overall, /t/-deletion is a quite frequent phenomenon, with roughly 29% of all underlying /t/s being dropped. The probabilities become more peaked when looking at finer context; see Table 3 for the empirical distribution of /t/-dropping for the six different contexts of the left-right setting.
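The derivation of the model input just described can be summarized in a short, hedged sketch; the record format (canonical and realized pronunciations as phoneme lists) is an assumption about how one might store the Buckeye annotations, not the corpus' actual file format.

```python
# Hedged sketch of deriving model input from Buckeye-style annotations.
# Each token is assumed to carry a canonical (dictionary) pronunciation and
# the manually transcribed realized pronunciation, both as phoneme lists.

def derive_input(utterance):
    """Map an utterance (list of (canonical, realized) pronunciation pairs)
    to the segment string fed to the model: the canonical pronunciation,
    minus the final /t/ whenever the realized form shows it was deleted."""
    observed = []
    for canonical, realized in utterance:
        segs = list(canonical)
        if segs and segs[-1] == "t" and (not realized or realized[-1] != "t"):
            segs = segs[:-1]            # word-final /t/ was deleted on the surface
        observed.append("".join(segs))
    return "".join(observed)            # unsegmented input: no word boundaries

# Example corresponding to Figure 6 ("I don't intend to"):
utt = [(["aI"], ["aI"]),
       (["d", "oU", "n", "t"], ["R", "oU", "n"]),            # final /t/ deleted
       (["I", "n", "t", "E", "n", "d"], ["I", "n", "t", "E", "n", "d"]),
       (["t", "U"], ["@"])]
print(derive_input(utt))   # -> "aIdoUnIntEndtU"
```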
Even in this simple example, there are 5 possible positions for the model to posit an underlying /t/. We evaluate the model in terms of F-score, the harmonic mean of recall (the fraction of underlying /t/s the model correctly recovered) and precision (the fraction of underlying /t/s the model predicted that were correct). In these experiments, we ran a total of 2500 iterations with a burnin of 2000. We collect samples with a lag of 10 for the last 500 iterations and perform maximum marginal decoding over these samples (Johnson and Goldwater, 2009), as well as running two chains so as to get an idea of the variance.5 We are also interested in how well the model can infer the rule probabilities from the data, that is, whether it can learn values for the different ρc parameters. We compare two settings, one where we perform inference for these parameters assuming a uniform Beta prior on each ρc (LEARN-ρ) and one where we provide the model with the empirical probabilities for each ρc as estimated off the gold-data (GOLD-ρ), e.g., for the uniform condition 0.29. The results are shown in Table 2. Best performance for both the Unigram and the Bigram model in the GOLD-ρ condition is achieved under the left-right setting, in line with the standard analyses of /t/-deletion as primarily being determined by the preceding and the following context. For the LEARN-ρ condition, the Bigram model still performs best in the left-right setting but the Unigram model’s performance drops 5As manually setting the hyper-parameters for the DPs in our model proved to be complicated and may be objected to on principled grounds, we perform inference for them under a vague gamma prior, as suggested by Teh et al. (2006) and Johnson and Goldwater (2009), using our own implementation of a slice-sampler (Neal, 2003). uniform right left-right Unigram LEARN-ρ 56.52 39.28 23.59 GOLD-ρ 62.08 60.80 66.15 Bigram LEARN-ρ 60.85 62.98 77.76 GOLD-ρ 69.06 69.98 73.45 Table 2: F-score of recovered /t/s with known word boundaries on real data for the three different context settings, averaged over two runs (all standard errors below 2%). Note how the Unigram model always suffers in the LEARN-ρ condition whereas the Bigram model’s performance is actually best for LEARN-ρ in the left-right setting. C C C V C $ V C V V V $ empirical 0.62 0.42 0.36 0.23 0.15 0.07 Unigram 0.41 0.33 0.17 0.07 0.05 0.00 Bigram 0.70 0.58 0.43 0.17 0.13 0.06 Table 3: Inferred rule-probabilities for different contexts in the left-right setting from one of the runs. “C C” stands for the context where the deleted /t/ is preceded and followed by a consonant, “V $” stands for the context where it is preceded by a vowel and followed by the utterance boundary. Note how the Unigram model severely under-estimates and the Bigram model slightly over-estimates the probabilities. in all settings and is now worst in the left-right and best in the uniform setting. In fact, comparing the inferred probabilities to the “ground truth” indicates that the Bigram model estimates the true probabilities more accurately than the Unigram model, as illustrated in Table 3 for the left-right setting. The Bigram model somewhat overestimates the probability for all post-consonantal contexts but the Unigram model severely underestimates the probability of /t/-deletion across all contexts. 4.3 Artificial data experiments To test our Gibbs sampling inference procedure, we ran it on artificial data generated according to the model itself. 
If our inference procedure fails to recover the underlying /t/s accurately in this setting, we should not expect it to work well on actual data. We generated our artificial data as follows. We transformed the sequence of canonical pronunciations in the Buckeye corpus (which we take to be underlying forms here) by randomly deleting final /t/s using empirical probabilities as shown in Table 3 to generate a sequence of artificial surface forms that serve as input to our models. We 1513 uniform right left-right Unigram LEARN-ρ 94.35 23.55 (+) 63.06 GOLD-ρ 94.45 94.20 91.83 Bigram LEARN-ρ 92.72 91.64 88.48 GOLD-ρ 92.88 92.33 89.32 Table 4: F-score of /t/-recovery with known word boundaries on artificial data, each condition tested on data that corresponds to the assumption, averaged over two runs (standard errors less than 2% except (+) = 3.68%)). Unigram Bigram LEARN-ρ 33.58 55.64 GOLD-ρ 55.92 57.62 Table 5: /t/-recovery F-scores when performing joint word segmention in the left-right setting, averaged over two runs (standard errors less than 2%). See Table 6 for the corresponding segmentation F-scores. did this for all three context settings, always estimating the deletion probability for each context from the gold-standard. The results of these experiments are given in table 4. Interestingly, performance on these artificial data is considerably better than on the real data. In particular the Bigram model is able to get consistently high F-scores for both the LEARN-ρ and the GOLD-ρ setting. For the Unigram model, we again observe the severe drop in the LEARN-ρ setting for the right and leftright settings although it does remarkably well in the uniform setting, and performs well across all settings in the GOLD-ρ condition. We take this to show that our inference algorithm is in fact working as expected. 4.4 Segmentation experiments Finally, we are also interested to learn how well we can do word segmentation and underlying /t/recovery jointly. Again, we look at both the LEARN-ρ and GOLD-ρ conditions but focus on the left-right setting as this worked best in the experiments above. For these experiments, we perform simulated annealing throughout the initial 2000 iterations, gradually cooling the temperature from 5 to 1, following the observation by Goldwater et al. (2009) that without annealing, the Bigram model gets stuck in sub-optimal parts of the solution space early on. During the annealing stage, we prevent the model from performing inference for underlying /t/s so that the annealing stage can be seen as an elaborate initialisation scheme, and we perform joint inference for the remaining 500 iterations, evaluating on the last sample and averaging over two runs. As neither the Unigram nor the Bigram model performs “perfect” word segmentation, we expect to see a degradation in /t/-recovery performance and this is what we find indeed. To give an impression of the impact of /t/-deletion, we also report numbers for running only the segmentation model on the Buckeye data with no deleted /t/s and on the data with deleted /t/s. The /t/-recovery scores are given in Table 5 and segmentation scores in Table 6. Again the Unigram model’s /t/-recovery score degrades dramatically in the LEARN-ρ condition. 
Looking at the segmentation performance this isn’t too surprising: the Unigram model’s poorer token Fscore, the standard measure of segmentation performance on a word token level, suggests that it misses many more boundaries than the Bigram model to begin with and, consequently, can’t recover any potential underlying /t/s at these boundaries. Also note that in the GOLD-ρ condition, our joint Bigram model performs almost as well on data with /t/-deletions as the word segmentation model on data that includes no variation at all. The generally worse performance of handling variation as measured by /t/-recovery F-score when performing joint segmentation is consistent with the finding of Elsner et al. (2012) who report considerable performance drops for their phonological learner when working with induced boundaries (note, however, that their model does not perform joint inference, rather the induced boundaries are given to their phonological learner as groundtruth). 5 Discussion There are two interesting findings from our experiments. First of all, we find a much larger difference between the Unigram and the Bigram model in the LEARN-ρ condition than in the GOLD-ρ condition. We suggest that this is due to the Unigram model’s lack of dependencies between underlying forms, depriving it of an important source of evidence. Bigram dependencies provide additional evidence for underlying /t/ that are deleted on the surface, and because the Bigram model identifies these underlying /t/ more accurately, it can also estimate the /t/ deletion probability more accurately. 1514 Unigram Bigram LEARN-ρ 54.53 72.55 (2.3%) GOLD-ρ 54.51 73.18 NO-ρ 54.61 70.12 NO-VAR 54.12 73.99 Table 6: Word segmentation F-scores for the /t/recovery F-scores in Table 5 averaged over two runs (standard errors less than 2% unless given). NO-ρ are scores for running just the word segmentation model with no /t/-deletion rule on the data that includes /t/-deletion, NO-VAR for running just the word segmentation model on the data with no /t/-deletions. For example, /t/ dropping in “don’t you” yields surface forms “don you”. Because the word bigram probability P(you | don’t) is high, the bigram model prefers to analyse surface “don” as underlying “don’t”. The Unigram model does not have access to word bigram information so the underlying forms it posits are less accurate (as shown in Table 2), and hence the estimate of the /t/-deletion probability is also less accurate. When the probabilities of deletion are pre-specified the Unigram model performs better but still considerably worse than the Bigram model when the word boundaries are known, suggesting the importance of non-phonological contextual effects that the Bigram model but not the Unigram model can capture. This suggests that for example word predictability in context might be an important factor contributing to /t/-deletion. The other striking finding is the considerable drop in performance between running on naturalistic and artificially created data. This suggests that the natural distribution of /t/-deletion is much more complex than can be captured by statistics over the phonological contexts we examined. Following Guy (1991), a finer-grained distinction for the preceeding segments might address this problem. 
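Returning to the “don’t you” example in the discussion above, the evidence asymmetry between the two models can be made concrete with a toy calculation; all probabilities below are invented for illustration and are not estimates from the Buckeye corpus.

# Purely illustrative numbers, not corpus estimates.
p_dont, p_don = 2e-4, 1e-5            # lexical probabilities of "don't" and "don"
p_you_given_dont, p_you_given_don = 0.1, 0.001
p_delete = 0.6                        # assumed /t/-deletion probability in this context

bigram_dont = p_dont * p_delete * p_you_given_dont    # 1.2e-05: underlying "don't"
bigram_don = p_don * p_you_given_don                  # 1.0e-08: underlying "don"
unigram_dont = p_dont * p_delete                      # 1.2e-04
unigram_don = p_don                                   # 1.0e-05

# The bigram comparison favours underlying "don't" by about three orders of
# magnitude because the following word contributes evidence; the unigram
# comparison rests only on the lexical probabilities and the deletion rate.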
Yet another suggestion comes from the recent work in Coetzee and Kawahara (2013), who claim that “[a] model that accounts perfectly for the overall rate of application of some variable process therefore does not necessarily account very well for the actual application of the process to individual words.” They argue that in particular the extremely high deletion rates typical of high-frequency items aren’t accurately captured when the deletion probability is estimated across all types. A look at the error patterns of our model on a sample from the Bigram model in the LEARN-ρ setting on the naturalistic data suggests that this is in fact a problem. For example, the word “just” has an extremely high rate of deletion, 1746/2442 ≈ 0.71. While many tokens of “jus” are “explained away” through predicting underlying /t/s, the (literally) extra-ordinary frequency of “jus”-tokens lets our model still posit it as an underlying form, although with a much dampened frequency (of the 1746 surface tokens, 1081 are analysed as being realizations of an underlying “just”). The /t/-recovery performance drop when performing joint word segmentation isn’t surprising, as even the Bigram model doesn’t deliver a very high-quality segmentation to begin with, leading to both sparsity (through missed word boundaries) and potential noise (through misplaced word boundaries). Using a more realistic generative process for the underlying forms, for example an Adaptor Grammar (Johnson et al., 2007), could address this shortcoming in future work without changing the overall architecture of the model, although novel inference algorithms might be required. 6 Conclusion and outlook We presented a joint model for word segmentation and the learning of phonological rule probabilities from a corpus of transcribed speech. We find that our Bigram model reaches 77% /t/-recovery F-score when run with knowledge of true word boundaries and when it can make use of both the preceding and the following phonological context, and that unlike the Unigram model it is able to learn the probability of /t/-deletion in different contexts. When performing joint word segmentation on the Buckeye corpus, our Bigram model reaches above 55% F-score for recovering deleted /t/s, with a word segmentation F-score of around 72%, which is 2% better than running a Bigram model that does not model /t/-deletion. We identified additional factors that might help in handling /t/-deletion and similar phenomena. A major advantage of our generative model is the ease and transparency with which its assumptions can be modified and extended. For future work we plan to incorporate into our model richer phonological contexts, item- and frequency-specific probabilities, and more direct use of word predictability. We also plan to extend our model to handle additional phenomena, an obvious candidate being /d/-deletion. Moreover, the two-level architecture we present is not tied to defining the underlying-to-surface mapping in terms of rules; it could equally be defined in terms of constraints in the spirit of Optimality Theory (Prince and Smolensky, 2004), and we plan to explore this alternative path in future work. To conclude, we presented a model that provides a clean framework to test the usefulness of different factors for word segmentation and handling phonological variation in a controlled manner. Acknowledgements We thank the anonymous reviewers for their valuable comments. 
This research was supported under Australian Research Council’s Discovery Projects funding scheme (project numbers DP110102506 and DP110102593). References Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Haper & Row, New York. Andries W. Coetzee and Shigeto Kawahara. 2013. Frequency biases in phonological variation. Natural Language and Linguisic Theory, 31:47–89. Andries W. Coetzee. 2004. What it Means to be a Loser: Non-Optimal Candidates in Optimality Theory. Ph.D. thesis, University of Massachusetts , Amherst. Laura Dilley, Amanda Millett, J. Devin McAuley, and Tonya R. Bergeson. to appear. Phonetic variation in consonants in infant-directed and adult-directed speech: The case of regressive place assimilation in word-final alveolar stops. Journal of Child Language. Micha Elsner, Sharon Goldwater, and Jacob Eisenstein. 2012. Bootstrapping a unified model of lexical and phonetic acquisition. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 184–193, Jeju Island, Korea. Association for Computational Linguistics. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2009. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21–54. Gregory R. Guy. 1991. Contextual conditioning in variable lexical phonology. Language Variation and Change, 3(2):223–39. Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317–325, Boulder, Colorado, June. Association for Computational Linguistics. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641–648. MIT Press, Cambridge, MA. Mark Johnson. 2008. Using Adaptor Grammars to identify synergies in the unsupervised acquisition of linguistic structure. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, pages 398–406, Columbus, Ohio. Association for Computational Linguistics. Jason Naradowsky and Sharon Goldwater. 2009. Improving morphology induction by learning spelling rules. In Proceedings of the 21st international jont conference on Artifical intelligence, pages 1531– 1536, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Radford M. Neal. 2003. Slice sampling. Annals of Statistics, 31:705–767. Dennis Norris, James M. Mcqueen, Anne Cutler, and Sally Butterfield. 1997. The possible-word constraint in the segmentation of continuous speech. Cognitive Psychology, 34(3):191 – 243. Mark A. Pitt, Laura Dilley, Keith Johnson, Scott Kiesling, William Raymond, Elizabeth Hume, and Eric Fosler-Lussier. 2007. Buckeye corpus of conversational speech. Alan Prince and Paul Smolensky. 2004. Optimality Theory: Constraint Interaction in Generative Grammar. Blackwell. Yee Whye Teh, Michael Jordan, Matthew Beal, and David Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581. 1516
2013
148
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1517–1526, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Compositional-ly Derived Representations of Morphologically Complex Words in Distributional Semantics Angeliki Lazaridou and Marco Marelli and Roberto Zamparelli and Marco Baroni Center for Mind/Brain Sciences (University of Trento, Italy) [email protected] Abstract Speakers of a language can construct an unlimited number of new words through morphological derivation. This is a major cause of data sparseness for corpus-based approaches to lexical semantics, such as distributional semantic models of word meaning. We adapt compositional methods originally developed for phrases to the task of deriving the distributional meaning of morphologically complex words from their parts. Semantic representations constructed in this way beat a strong baseline and can be of higher quality than representations directly constructed from corpus data. Our results constitute a novel evaluation of the proposed composition methods, in which the full additive model achieves the best performance, and demonstrate the usefulness of a compositional morphology component in distributional semantics. 1 Introduction Effective ways to represent word meaning are needed in many branches of natural language processing. In the last decades, corpus-based methods have achieved some degree of success in modeling lexical semantics. Distributional semantic models (DSMs) in particular represent the meaning of a word by a vector, the dimensions of which encode corpus-extracted co-occurrence statistics, under the assumption that words that are semantically similar will occur in similar contexts (Turney and Pantel, 2010). Reliable distributional vectors can only be extracted for words that occur in many contexts in the corpus. Not surprisingly, there is a strong correlation between word frequency and vector quality (Bullinaria and Levy, 2007), and since most words occur only once even in very large corpora (Baroni, 2009), DSMs suffer data sparseness. While word rarity has many sources, one of the most common and systematic ones is the high productivity of morphological derivation processes, whereby an unlimited number of new words can be constructed by adding affixes to existing stems (Baayen, 2005; Bauer, 2001; Plag, 1999).1 For example, in the multi-billion-word corpus we introduce below, perfectly reasonable derived forms such as lexicalizable or affixless never occur. Even without considering the theoretically infinite number of possible derived nonce words, and restricting ourselves instead to words that are already listed in dictionaries, complex forms cover a high portion of the lexicon. For example, morphologically complex forms account for 55% of the lemmas in the CELEX English database (see Section 4.1 below). In most of these cases (80% according to our corpus) the stem is more frequent than the complex form (e.g., the stem build occurs 15 times more often than the derived form rebuild, and the latter is certainly not an unusual derived form). DSMs ignore derivational morphology altogether. Consequently, they cannot provide meaning representations for new derived forms, nor can they harness the systematic relation existing between stems and derivations (any English speaker can infer that to rebuild is to build again, whether they are familiar with the prefixed form or not) in order to mitigate derived-form sparseness problems. 
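As background for the experiments that follow, a distributional vector of the kind described above can be obtained by counting context words in a fixed window around each occurrence of a target; the sketch below is a minimal illustration (window size and tokenisation are placeholders, not the settings of Section 4.2).

from collections import Counter, defaultdict

def cooccurrence_vectors(sentences, targets, window=2):
    """Map each target word to a Counter over context words seen
    within `window` tokens to its left or right."""
    vectors = defaultdict(Counter)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            if w not in targets:
                continue
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[w][tokens[j]] += 1
    return vectors

vecs = cooccurrence_vectors([["we", "rebuild", "the", "old", "house"],
                             ["they", "build", "a", "house"]],
                            targets={"build", "rebuild"})
# vecs["rebuild"] == Counter({"we": 1, "the": 1, "old": 1}); rare derived
# forms like "rebuild" accumulate far fewer counts than their stems.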
A simple way to handle derivational mor1Morphological derivation constructs new words (in the sense of lemmas) from existing lexical items (resource+ful→resourceful). In this work, we do not treat inflectional morphology, pertaining to affixes that encode grammatical features such as number or tense (dog+s). We use morpheme for any component of a word (resource and -ful are both morphemes). We use stem for the lexical item that constitutes the base of derivation (resource) and affix (prefix or suffix) for the element attached to the stem to derive the new form (-ful). In English, stems are typically independent words, affixes bound morphemes, i.e., they cannot stand alone. Note that a stem can in turn be morphologically derived, e.g., point+less in pointless+ly. Finally, we use morphologically complex as synonymous with derived. 1517 phology would be to identify the stem of rare derived words and use its distributional vector as a proxy to derived-form meaning.2 The meaning of rebuild is not that far from that of build, so the latter might provide a reasonable surrogate. Still, something is clearly lost (if the author of a text felt the need to use the derived form, the stem was not fully appropriate), and sometimes the jump in meaning can be quite dramatic (resourceless and resource mean very different things!). In the past few years there has been much interest in how DSMs can scale up to represent the meaning of larger chunks of text such as phrases or even sentences. Trying to represent the meaning of arbitrarily long constructions by directly collecting co-occurrence statistics is obviously ineffective and thus methods have been developed to derive the meaning of larger constructions as a function of the meaning of their constituents (Baroni and Zamparelli, 2010; Coecke et al., 2010; Mitchell and Lapata, 2008; Mitchell and Lapata, 2010; Socher et al., 2012). Compositional distributional semantic models (cDSMs) of word units aim at handling, compositionally, the high productivity of phrases and consequent data sparseness. It is natural to hypothesize that the same methods can be applied to morphology to derive the meaning of complex words from the meaning of their parts: For example, instead of harvesting a rebuild vector directly from the corpus, the latter could be constructed from the distributional representations of re- and build. Besides alleviating data sparseness problems, a system of this sort, that automatically induces the semantic contents of morphological processes, would also be of tremendous theoretical interest, given that the semantics of derivation is a central and challenging topic in linguistic morphology (Dowty, 1979; Lieber, 2004). In this paper, we explore, for the first time (except for the proof-of-concept study in Guevara (2009)), the application of cDSMs to derivational morphology. We adapt a number of composition methods from the literature to the morphological setting, and we show that some of these methods can provide better distributional representations of derived forms than either those directly harvested from a large corpus, or those obtained by using the stem as a proxy to derived-form meaning. Our 2Of course, spotting and segmenting complex words is a big research topic unto itself (Beesley and Karttunen, 2000; Black et al., 1991; Sproat, 1992), and one we completely sidestep here. 
results suggest that exploiting morphology could improve the quality of DSMs in general, extend the range of tasks that cDSMs can successfully model and support the development of new ways to test their performance. 2 Related work Morphological induction systems use corpusbased methods to decide if two words are morphologically related and/or to segment words into morphemes (Dreyer and Eisner, 2011; Goldsmith, 2001; Goldwater and McClosky, 2005; Goldwater, 2006; Naradowsky and Goldwater, 2009; Wicentowski, 2004). Morphological induction has recently received considerable attention since morphological analysis can mitigate data sparseness in domains such as parsing and machine translation (Goldberg and Tsarfaty, 2008; Lee, 2004). Among the cues that have been exploited there is distributional similarity among morphologically related words (Schone and Jurafsky, 2000; Yarowsky and Wicentowski, 2000). Our work, however, differs substantially from this track of research. We do not aim at segmenting morphological complex words or identifying paradigms. Our goal is to automatically construct, given distributional representations of stems and affixes, semantic representations for the derived words containing those stems and affixes. A morphological induction system, given rebuild, will segment it into re- and build (possibly using distributional similarity between the words as a cue). Our system, given re- and build, predicts the (distributional semantic) meaning of rebuild. Another emerging line of research uses distributional semantics to model human intuitions about the semantic transparency of morphologically derived or compound expressions and how these impact various lexical processing tasks (Kuperman, 2009; Wang et al., 2012). Although these works exploit vectors representing complex forms, they do not attempt to generate them compositionally. The only similar study we are aware of is that of Guevara (2009). Guevara found a systematic geometric relation between corpus-based vectors of derived forms sharing an affix and their stems, and used this finding to motivate the composition method we term lexfunc below. However, unlike us, he did not test alternative models, and he only presented a qualitative analysis of the trajectories triggered by composition with various affixes. 1518 3 Composition methods Distributional semantic models (DSMs), also known as vector-space models, semantic spaces, or by the names of famous incarnations such as Latent Semantic Analysis or Topic Models, approximate the meaning of words with vectors that record their patterns of co-occurrence with corpus context features (often, other words). There is an extensive literature on how to develop such models and on their evaluation. Recent surveys include Clark (2012), Erk (2012) and Turney and Pantel (2010). We focus here on compositional DSMs (cDSMs). Since the very inception of distributional semantics, there have been attempts to compose meanings for sentences and larger passages (Landauer and Dumais, 1997), but interest in compositional DSMs has skyrocketed in the last few years, particularly since the influential work of Mitchell and Lapata (2008; 2009; 2010). 
For the current study, we have reimplemented and adapted to the morphological setting all cDSMs we are aware of, excluding the tensorproduct-based models that Mitchell and Lapata (2010) have shown to be empirically disappointing and the models of Socher and colleagues (Socher et al., 2011; Socher et al., 2012), that require complex optimization procedures whose adaptation to morphology we leave to future work. Mitchell and Lapata proposed a set of simple and effective models in which the composed vectors are obtained through component-wise operations on the constituent vectors. Given input vectors u and v, the multiplicative model (mult) returns a composed vector c with: ci = uivi. In the weighted additive model (wadd), the composed vector is a weighted sum of the two input vectors: c = αu + βv, where α and β are two scalars. In the dilation model, the output vector is obtained by first decomposing one of the input vectors, say v, into a vector parallel to u and an orthogonal vector. Following this, the parallel vector is dilated by a factor λ before re-combining. This results in: c = (λ −1)⟨u, v⟩u + ⟨u, u⟩v. Guevara (2010) and Zanzotto et al. (2010) propose the full additive model (fulladd), where the two vectors to be added are pre-multiplied by weight matrices: c = Au + Bv Since the Mitchell and Lapata and fulladd models were developed for phrase composition, the two input vectors were taken to be, very straightforwardly, the vectors of the two words to be composed into the phrase of interest. In morphological derivation, at least one of the items to be composed (the affix) is a bound morpheme. In our adaptation of these composition models, we build bound morpheme vectors by accumulating the contexts in which a set of derived words containing the relevant morphemes occur, e.g., the re- vector aggregates co-occurrences of redo, remake, retry, etc. Baroni and Zamparelli (2010) and Coecke et al. (2010) take inspiration from formal semantics to characterize composition in terms of function application, where the distributional representation of one element in a composition (the functor) is not a vector but a function. Given that linear functions can be expressed by matrices and their application by matrix-by-vector multiplication, in this lexical function (lexfunc) model, the functor is represented by a matrix U to be multiplied with the argument vector v: c = Uv. In the case of morphology, it is natural to treat bound affixes as functions over stems, since affixes encode the systematic semantic patterns we intend to capture. Unlike the other composition methods, lexfunc does not require the construction of distributional vectors for affixes. A matrix representation for every affix is instead induced directly from examples of stems and the corresponding derived forms, in line with the intuition that every affix corresponds to a different pattern of change of the stem meaning. Finally, as already discussed in the Introduction, performing no composition at all but using the stem vector as a surrogate of the derived form is a reasonable strategy. We saw that morphologically derived words tend to appear less frequently than their stems, and in many cases the meanings are close. Consequently, we expect a stem-only “composition” method to be a strong baseline in the morphological setting. 
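The composition functions above are straightforward to state over plain vectors; the sketch below assumes NumPy arrays, with u the affix vector, v the stem vector, and the parameters (α, β, λ, and the matrices A, B, U) passed in explicitly, since their estimation is only described in Section 4.3.

import numpy as np

def mult(u, v):
    return u * v                                   # c_i = u_i * v_i

def wadd(u, v, alpha, beta):
    return alpha * u + beta * v                    # weighted additive

def dilation(u, v, lam):
    return (lam - 1) * np.dot(u, v) * u + np.dot(u, u) * v

def fulladd(u, v, A, B):
    return A @ u + B @ v                           # full additive

def lexfunc(U, v):
    return U @ v                                   # affix matrix applied to the stem

def stem_baseline(u, v):
    return v                                       # no composition: use the stem vector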
4 Experimental setup 4.1 Morphological data We obtained a list of stem/derived-form pairs from the CELEX English Lexical Database, a widely used 100K-lemma lexicon containing, among other things, information about the derivational structure of words (Baayen et al., 1995). For each derivational affix present in CELEX, we extracted from the database the full list of stem/derived pairs matching its most common part-of-speech signature (e.g., for -er we only considered pairs 1519 Affix Stem/Der. Training HQ/Tot. Avg. POS Items Test Items SDR -able verb/adj 177 30/50 5.96 -al noun/adj 245 41/50 5.88 -er verb/noun 824 33/50 5.51 -ful noun/adj 53 42/50 6.11 -ic noun/adj 280 43/50 5.99 -ion verb/noun 637 38/50 6.22 -ist noun/noun 244 38/50 6.16 -ity adj/noun 372 33/50 6.19 -ize noun/verb 105 40/50 5.96 -less noun/adj 122 35/50 3.72 -ly adj/adv 1847 20/50 6.33 -ment verb/noun 165 38/50 6.06 -ness adj/noun 602 33/50 6.29 -ous noun/adj 157 35/50 5.94 -y noun/adj 404 27/50 5.25 inadj/adj 101 34/50 3.39 reverb/verb 86 27/50 5.28 unadj/adj 128 36/50 3.23 tot */* 6549 623/900 5.52 Table 1: Derivational morphology dataset having a verbal stem and nominal derived form). Since CELEX was populated by semi-automated morphological analysis, it includes forms that are probably not synchronically related to their stems, such as crypt+ic or re+form. However, we did not manually intervene on the pairs, since we are interested in training and testing our methods in realistic, noisy conditions. In particular, the need to pre-process corpora to determine which forms are “opaque”, and should thus be bypassed by our systems, would greatly reduce their usefulness. Pairs in which either word occurred less than 20 times in our source corpus (described in Section 4.2 below) were filtered out and, in our final dataset, we only considered the 18 affixes (3 prefixes and 15 suffixes) with at least 100 pairs meeting this condition. We randomly chose 50 stem/derived pairs (900 in total) as test data. The remaining data were used as training items to estimate the parameters of the composition methods. Table 1 summarizes various characteristics of the dataset3 (the last two columns of the table are explained in the next paragraphs). Annotation of quality of test vectors The quality of the corpus-based vectors representing derived test items was determined by collecting human semantic similarity judgments in a crowdsourcing survey. In particular, we use the similarity of a vector to its nearest neighbors (NNs) as a proxy measure of quality. The underlying assump3Available from http://clic.cimec.unitn.it/ composes tion is that a vector, in order to be a good representation of the meaning of the corresponding word, should lie in a region of semantic space populated by intuitively similar meanings, e.g., we are more likely to have captured the meaning of car if the NN of its vector is the automobile vector rather than potato. Therefore, to measure the quality of a given vector, we can look at the average similarity score provided by humans when comparing this very vector with its own NNs. All 900 derived vectors from the test set were matched with their three closest NNs in our semantic space (see Section 4.2), thus producing a set of 2, 700 word pairs. These pairs were administered to CrowdFlower users,4 who were asked to judge the relatedness of the two meanings on a 7-point scale (higher for more related). 
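The nearest-neighbour matching used as a quality proxy above can be sketched as a brute-force cosine search; this is an illustration of the procedure, not the toolkit used by the authors.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest_neighbours(target_word, vectors, k=3):
    """Return the k words whose vectors are closest to the target's,
    excluding the target itself; `vectors` maps words to NumPy arrays."""
    target = vectors[target_word]
    scored = [(cosine(target, vec), word)
              for word, vec in vectors.items() if word != target_word]
    return [word for _, word in sorted(scored, reverse=True)[:k]]

# Each of the 900 derived test vectors is paired with its 3 NNs, and the
# average human relatedness rating over these pairs serves as its quality score.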
In order to ensure that participants were committed to the task and exclude non-proficient English speakers, we used 60 control pairs as gold standard, consisting of either perfect synonyms or completely unrelated words. We obtained 30 judgments for each derived form (10 judgments for each of 3 neighbor comparisons), with mean participant agreement of 58%. These ratings were averaged item-wise, resulting in a Gaussian distribution with a mean of 3.79 and a standard deviation of 1.31. Finally, each test item was marked as high-quality (HQ) if its derived form received an average score of at least 3, as low-quality (LQ) otherwise. Table 1 reports the proportion of HQ test items for each affix, and Table 2 reports some examples of HQ and LQ items with the corresponding NNs. It is worth observing that the NNs of the LQ items, while not as relevant as the HQ ones, are hardly random. Annotation of similarity between stem and derived forms Derived forms differ in terms of how far their meaning is with respect to that of their stem. Certain morphological processes have systematically more impact than others on meaning: For example, the adjectival prefix in- negates the meaning of the stem, whereas -ly has the sole function to convert an adjective into an adverb. But the very same affix can affect different stems in different ways. For example, remelt means little more than to melt again, but rethink has subtler implications of changing one’s way to look at a problem, and while one of the senses of cycling is present in recycle, it takes some effort to see their relation. 4http://www.crowdflower.com 1520 Affix Type Derived form Neighbors -ist HQ transcendentalist mythologist, futurist, theosophist LQ florist Harrod, wholesaler, stockist -ity HQ publicity publicise, press, publicize LQ sparsity dissimilarity, contiguity, perceptibility -ment HQ advertisement advert, promotional, advertising LQ inducement litigant, contractually, voluntarily inHQ inaccurate misleading, incorrect, erroneous LQ inoperable metastasis, colorectal, biopsy reHQ recapture retake, besiege, capture LQ rename defunct, officially, merge Table 2: Examples of HQ and LQ derived vectors with their NNs We conducted a separate crowdsourcing study where participants were asked to rate the 900 test stem/derived pairs for the strength of their semantic relationship on a 7-point scale. We followed a procedure similar to the one described for quality measurement; 7 judgments were collected for each pair. Participants’ agreement was at 60%. The last column of Table 1 reports the average stem/derived relatedness (SDR) for the various affixes. Note that the affixes with systematically lower SDR are those carrying a negative meaning (in-, un-, -less), whereas those with highest SDR do little more than changing the POS of the stem (-ion, -ly, ness). Among specific pairs with very low relatedness we encounter hand/handy, bear/bearable and active/activist, whereas compulsory/compulsorily, shameless/shamelessness and chaos/chaotic have high SDR. Since the distribution of the average ratings was negatively skewed (mean rating: 5.52, standard deviation: 1.26),5 we took 5 as the rating threshold to classify items as having high (HR) or low (LR) relatedness to their stems. 4.2 Distributional semantic space6 We use as our source corpus the concatenation of ukWaC, the English Wikipedia (2009 dump) and the BNC,7 for a total of about 2.8 billion tokens. 
We collect co-occurrence statistics for the top 20K content words (adjectives, adverbs, nouns, verbs) 5The negative skew is not surprising, as derived forms must have some relation to their stems! 6Most steps of the semantic space construction and composition pipelines were implemented using the DISSECT toolkit: https://github.com/ composes-toolkit/dissect. 7http://wacky.sslmit.unibo.it, http: //en.wikipedia.org, http://www.natcorp. ox.ac.uk in lemma format, plus any item from the morphological dataset described above that was below this rank. The top 20K content words also constitute our context elements. We use a standard bag-of-words approach, counting collocates in a narrow 2-word before-and-after window. We apply (non-negative) Pointwise Mutual Information as weighting scheme and dimensionality reduction by Non-negative Matrix Factorization, setting the number of reduced-space dimensions to 350. These settings are chosen without tuning, and are based on previous experiments where they produced high-quality semantic spaces (Boleda et al., 2013; Bullinaria and Levy, 2007). 4.3 Implementation of composition methods All composition methods except mult and stem have weights to be estimated (e.g., the λ parameter of dilation or the affix matrices of lexfunc). We adopt the estimation strategy proposed by Guevara (2010) and Baroni and Zamparelli (2010), namely we pick parameter values that optimize the mapping between stem and derived vectors directly extracted from the corpus. To learn, say, a lexfunc matrix representing the prefix re-, we extract vectors of V/reV pairs that occur with sufficient frequency (visit/revisit, think/rethink...). We then use least-squares methods to find weights for the re- matrix that minimize the distance between each reV vector generated by the model given the input V and the corresponding corpus-observed derived vector (e.g., we try to make the modelpredicted re+visit vector as similar as possible to the corpus-extracted one). This is a general estimation approach that does not require taskspecific hand-labeled data, and for which simple analytical solutions of the least-squares error prob1521 lem exist for all our composition methods. We use only the training items from Section 4.1 for estimation. Note that, unlike the test items, these have not been annotated for quality, so we are adopting an unsupervised (no manual labeling) but noisy estimation method.8 For the lexfunc model, we use the training items separately to obtain weight matrices representing each affix, whereas for the other models all training data are used together to globally derive single sets of affix and stem weights. For the wadd model, the learning process results in 0.16×affix+0.33×stem, i.e., the affix contributes only half of its mass to the composition of the derived form. For dilation, we stretch the stem (i.e., v of the dilation equation is the stem vector), since it should provide richer contents than the affix to the derived meaning. We found that, on average across the training pairs, dilation weighted the stem 20 times more heavily than the affix (0.05×affix+1×stem). We then expect that the dilation model will have similar performance to the baseline stem model, as confirmed below.9 For all methods, vectors were normalized before composing both in training and in generation. 
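The estimation step for the lexfunc matrices can be sketched with ordinary least squares: stack the stem vectors of the training pairs for one affix as rows of X, the corresponding corpus-observed derived vectors as rows of Y, and solve for the matrix mapping one to the other. The sketch below is a minimal illustration without regularisation; the analogous regression over affix and stem vectors yields the fulladd matrices A and B.

import numpy as np

def estimate_lexfunc_matrix(stem_vectors, derived_vectors):
    """Least-squares fit of an affix matrix U such that U @ stem ~ derived.

    stem_vectors, derived_vectors: aligned lists of d-dimensional arrays for
    training pairs such as (visit, revisit), (think, rethink), ...
    """
    X = np.vstack(stem_vectors)        # n_pairs x d
    Y = np.vstack(derived_vectors)     # n_pairs x d
    # Solve X @ M ~ Y in the least-squares sense; then U = M.T, so that
    # U @ x approximates the derived vector for a stem vector x.
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return M.T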
5 Experiment 1: approximating high-quality corpus-extracted vectors The first experiment investigates to what extent composition models can approximate high-quality (HQ) corpus-extracted vectors representing derived forms. Note that since the test items were excluded from training, we are simulating a scenario in which composition models must generate representations for nonce derived forms. Cosine similarity between model-generated and corpus-extracted vectors were computed for all models, including the stem baseline (i.e., cosine between stem and derived form). The first row of Table 3 reports mean similarities. The stem method sets the level of performance relatively high, confirming its soundness. Indeed, the parameter-free mult model performs below the baseline.10 As expected, dilation performs simi8More accurately, we relied on semi-manual CELEX information to identify derived forms. A further step towards a fully knowledge-free system would be to pre-process the corpus with an unsupervised morphological induction system to extract stem/derived pairs. 9The other models have thousands of weights to be estimated, so we cannot summarize the outcome of parameter estimation here. 10This result does not necessarily contradict those of stem mult dil. wadd fulladd lexfunc All 0.47 0.39 0.48 0.50 0.56 0.54 HR 0.52 0.43 0.53 0.55 0.61 0.58 LR 0.32 0.28 0.33 0.38 0.41 0.42 Table 3: Mean similarity of composed vectors to high-quality corpus-extracted derived-form vectors, for all as well as high- (HR) and lowrelatedness (LR) test items larly to the baseline, while wadd outperforms it, although the effect does not reach significance (p=.06).11 Both fulladd and lexfunc perform significantly better than stem (p < .001). Lexfunc provides a flexible way to account for affixation, since it models it directly as a function mapping from and onto word vectors, without requiring a vector representation of bound affixes. The reason at the base of its good performance is thus quite straightforward. On the other hand, it is surprising that a simple representation of bound affixes (i.e., as vectors aggregating the contexts of words containing them) can work so well, at least when used in conjunction with the granular dimension-by-dimension weights assigned by the fulladd method. We hypothesize that these aggregated contexts, by providing information about the set of stems an affix combines with, capture the shared semantic features that the affix operates on. When the meaning of the derived form is far from that of its stem, the stem baseline should no longer constitute a suitable surrogate of derivedform meaning. The LR cases (see Section 4.1 above) are thus crucial to understand how well composition methods capture not only stem meaning, but also affix-triggered semantics. The HR and LR rows of Table 3 present the results for the respective test subsets. As expected, the stem approach undergoes a strong drop when performance is measured on LR items. At the other extreme, fulladd and lexfunc, while also finding the LR cases more difficult, still clearly outperform the baseline (p<.001), confirming that they capture the meaning of derived forms beyond what their stems contribute to it. The effect of wadd, again, approaches significance when compared to the baseline (p = .05). Very encouragingly, both Mitchell and Lapata and others who found mult to be highly competitive. 
Due to differences in co-occurrence weighting schemes (we use a logarithmically scaled measure, they do not), their multiplicative model is closer to our additive one. 11Significance assessed by means of Tukey Honestly Significant Difference tests (Abdi and Williams, 2010) 1522 stem mult wadd dil. fulladd lexfunc -less 0.22 0.23 0.30 0.24 0.38 0.44 in0.39 0.34 0.45 0.40 0.47 0.45 un0.33 0.33 0.41 0.34 0.44 0.46 Table 4: Mean similarity of composed vectors to high-quality corpus-extracted derived-form vectors with negative affixes fulladd and lexfunc significantly outperform stem also in the HR subset (p<.001). That is, the models provide better approximations of derived forms even when the stem itself should already be a good surrogate. The difference between the two models is not significant. We noted in Section 4.1 that forms containing the “negative” affixes -less, un- and in- received on average low SDR scores, since negation impacts meaning more drastically than other operations. Table 4 reports the performance of the models on these affixes. Indeed, the stem baseline performs quite poorly, whereas fulladd, lexfunc and, to a lesser extent, wadd are quite effective in this condition as well, all performing greatly above the baseline. These results are intriguing in light of the fact that modeling negation is a challenging task for DSMs (Mohammad et al., 2013) as well as cDSMs (Preller and Sadrzadeh, 2011). To the extent that our best methods have captured the negating function of a prefix such as in-, they might be applied to tasks such as recognizing lexical opposites, or even simple forms of syntactic negation (modeling inoperable is just a short step away from modeling not operable compositionally). 6 Experiment 2: Comparing the quality of corpus-extracted and compositionally generated words The first experiment simulated the scenario in which derived forms are not in our corpus, so that directly extracting their representation from it is not an option. The second experiment tests if compositionally-derived representations can be better than those extracted directly from the corpus when the latter is a possible strategy (i.e., the derived forms are attested in the source corpus). To this purpose, we focused on those 277 test items that were judged as low-quality (LQ, see Section 4.1), which are presumably more challenging to generate, and where the compositional route could be most useful. We evaluated the derived forms generated by corpus stem wadd fulladd lexfunc All 2.28 3.26 4.12 3.99 3.09 HR 2.29 3.56 4.48 4.31 3.31 LR 2.22 2.48 3.14 3.12 2.52 Table 5: Average quality ratings of derived vectors Target Model Neighbors florist wadd flora, fauna, ecosystem fulladd flora, fauna, egologist lexfunc ornithologist, naturalist, botanist sparsity wadd sparse, sparsely, dense fulladd sparse, sparseness, angularity lexfunc fragility, angularity, smallness inducement wadd induce, inhibit, inhibition fulladd induce, inhibition, mediate lexfunc impairment, cerebral, ocular inoperable wadd operable, palliation, biopsy fulladd operable, inoperative, ventilator lexfunc inoperative, unavoidably, flaw rename wadd name, later, namesake fulladd name, namesake, later lexfunc temporarily, reinstate, thereafter Table 6: Examples of model-predicted neighbors for words with LQ corpus-extracted vectors the models that performed best in the first experiment (fulladd, lexfunc and wadd), as well as the stem baseline, by means of another crowdsourcing study. 
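Model-predicted neighbours of the kind shown in Table 6 are obtained by composing a vector for the derived form and querying the space around it; the sketch below illustrates this for the lexfunc model, using affix matrices estimated as above (hypothetical glue code, not the authors’ implementation).

import numpy as np

def predict_neighbours(affix, stem_word, vectors, affix_matrices, k=3):
    """Compose a lexfunc vector for affix+stem and return its k nearest
    neighbours; `affix_matrices` holds the estimated U matrices."""
    composed = affix_matrices[affix] @ vectors[stem_word]
    composed = composed / np.linalg.norm(composed)
    scored = [(float(np.dot(composed, vec) / np.linalg.norm(vec)), word)
              for word, vec in vectors.items() if word != stem_word]
    return [word for _, word in sorted(scored, reverse=True)[:k]]

# e.g. predict_neighbours("in-", "operable", vectors, affix_matrices) should,
# with a well-estimated matrix for in-, surface neighbours such as
# "inoperative" rather than the hospital-domain neighbours of the
# corpus-extracted "inoperable" vector in Table 2.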
We followed the same procedure used to assess the quality of corpus-extracted vectors, that is, we asked judges to rate the relatedness of the target forms to their NNs (we obtained on average 29 responses per form). The first line of Table 5 reports the average quality (on a 7-point scale) of the representations of the derived forms as produced by the models and baseline, as well as of the corpus-harvested ones (corpus column). All compositional models produce representations that are of significantly higher quality (p < .001) than the corpus-based ones. The effect is also evident in qualitative terms. Table 6 presents the NNs predicted by the three compositional methods for the same LQ test items whose corpus-based NNs are presented in Table 2. These results indicate that morpheme composition is an effective solution when the quality of corpus-extracted derived forms is low (and the previous experiment showed that, when their quality is high, composition can at least approximate corpus-based vectors). With respect to Experiment 1, we obtain a different ranking of the models, with lexfunc being outperformed by both wadd and fulladd (p<.001), that are statistically indistinguishable. The wadd 1523 composition is dominated by the stem, and by looking at the examples in Table 6 we notice that both this model and fulladd tend to feature the stem as NN (100% of the cases for wadd, 73% for fulladd in the complete test set). The question thus arises as to whether the good performance of these composition techniques is simply due to the fact that they produce derived forms that are near their stems, with no added semantic value from the affix (a “stemploitation” strategy). However, the stemploitation hypothesis is dispelled by the observation that both models significantly outperform the stem baseline (p<.001), despite the fact that the latter, again, has good performance, significantly outperforming the corpusderived vectors (p < .001). Thus, we confirm that compositional models provide higher quality vectors that are capturing the meaning of derived forms beyond the information provided by the stem. Indeed, if we focus on the third row of Table 5, reporting performance on low stem-derived relatedness (LR) items (annotated as described in Section 4.1), fulladd and wadd still significantly outperform the corpus representations (p<.001), whereas the quality of the stem representations of LR items is not significantly different form that of the corpus-derived ones. Interestingly, lexfunc displays the smallest drop in performance when restricting evaluation to LR items; however, since it does not significantly outperform the LQ corpus representations, this is arguably due to a floor effect. 7 Conclusion and future work We investigated to what extent cDSMs can generate effective meaning representations of complex words through morpheme composition. Several state-of-the-art composition models were adapted and evaluated on this novel task. Our results suggest that morpheme composition can indeed provide high-quality vectors for complex forms, improving both on vectors directly extracted from the corpus and on a stem-backoff strategy. This result is of practical importance for distributional semantics, as it paves the way to address one of the main causes of data sparseness, and it confirms the usefulness of the compositional approach in a new domain. Overall, fulladd emerged as the best performing model, with both lexfunc and the simple wadd approach constituting strong rivals. 
The effectiveness of the best models extended also to the challenging cases where the meaning of derived forms is far from that of the stem, including negative affixes. The fulladd method requires a vector representation for bound morphemes. A first direction for future work will thus be to investigate which aspects of the meaning of bound morphemes are captured by our current simple-minded approach to populating their vectors, and to explore alternative ways to construct them, seeing if they further improve fulladd performance. A natural extension of our research is to address morpheme composition and morphological induction jointly, trying to model the intuition that good candidate morphemes should have coherent semantic representations. Relatedly, in the current setting we generate complex forms from their parts. We want to investigate the inverse route, namely “de-composing” complex words to derive representations of their stems, especially for cases where the complex words are more frequent (e.g. comfort/comfortable). We would also like to apply composition to inflectional morphology (that currently lies outside the scope of distributional semantics), to capture the nuances of meaning that, for example, distinguish singular and plural nouns (consider, e.g., the difference between the mass singular tea and the plural teas, which coerces the noun into a count interpretation (Katz and Zamparelli, 2012)). Finally, in our current setup we focus on a single composition step, e.g., we derive the meaning of inoperable by composing the morphemes in- and operable. But operable is in turn composed of operate and -able. In the future, we will explore recursive morpheme composition, especially since we would like to apply these methods to more complex morphological systems (e.g., agglutinative languages) where multiple morphemes are the norm. 8 Acknowledgments We thank Georgiana Dinu and Nghia The Pham for helping out with DISSECT-ion and the reviewers for helpful feedback. This research was supported by the ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES). 1524 References Herv´e Abdi and Lynne Williams. 2010. NewmanKeuls and Tukey test. In Neil Salkind, Bruce Frey, and Dondald Dougherty, editors, Encyclopedia of Research Design, pages 897–904. Sage, Thousand Oaks, CA. Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1995. The CELEX lexical database (release 2). CD-ROM, Linguistic Data Consortium, Philadelphia, PA. Harald Baayen. 2005. Morphological productivity. In Rajmund Piotrowski Reinhard K¨ohler, Gabriel Altmann, editor, Quantitative Linguistics: An International Handbook, pages 243–256. Mouton de Gruyter, Berlin, Germany. Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of EMNLP, pages 1183–1193, Boston, MA. Marco Baroni. 2009. Distributions in text. In Anke L¨udeling and Merja Kyt¨o, editors, Corpus Linguistics: An International Handbook, volume 2, pages 803–821. Mouton de Gruyter, Berlin, Germany. Laurie Bauer. 2001. Morphological Productivity. Cambridge University Press, Cambridge, UK. Kenneth Beesley and Lauri Karttunen. 2000. FiniteState Morphology: Xerox Tools and Techniques. Cambridge University Press, Cambridge, UK. Alan Black, Stephen Pulman, Graeme Ritchie, and Graham Russell. 1991. Computational Morphology. MIT Press, Cambrdige, MA. Gemma Boleda, Marco Baroni, Louise McNally, and Nghia Pham. 2013. 
Intensionality was only alleged: On adjective-noun composition in distributional semantics. In Proceedings of IWCS, pages 35–46, Potsdam, Germany. John Bullinaria and Joseph Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39:510–526. Stephen Clark. 2012. Vector space models of lexical meaning. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics, 2nd edition. Blackwell, Malden, MA. In press. Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis, 36:345–384. David Dowty. 1979. Word Meaning and Montague Grammar. Springer, New York. Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a Dirichlet process mixture model. In Proceedings of EMNLP, pages 616–627, Edinburgh, UK. Katrin Erk. 2012. Vector space models of word meaning and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635–653. Yoav Goldberg and Reut Tsarfaty. 2008. A single generative model for joint morphological segmentation and syntactic parsing. In Proceedings of ACL, pages 371–379, Columbus, OH. John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 2(27):153–198. Sharon Goldwater and David McClosky. 2005. Improving statistical MT through morphological analysis. In Proceedings of EMNLP, pages 676–683, Vancouver, Canada. Sharon Goldwater. 2006. Nonparametric Bayesian Models of Lexical Acquisition. Ph.D. thesis, Brown University. Emiliano Guevara. 2009. Compositionality in distributional semantics: Derivational affixes. In Proceedings of the Words in Action Workshop, Pisa, Italy. Emiliano Guevara. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of GEMS, pages 33–37, Uppsala, Sweden. Graham Katz and Roberto Zamparelli. 2012. Quantifying count/mass elasticity. In Proceedings of WCCFL, pages 371–379, Tucson, AR. Victor Kuperman. 2009. Semantic transparency revisited. Presentation at the 6th International Morphological Processing Conference. Thomas Landauer and Susan Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211– 240. Young-Suk Lee. 2004. Morphological analysis for statistical machine translation. In Proceedings of HLTNAACL, pages 57–60, Boston, MA. Rochelle Lieber. 2004. Morphology and Lexical Semantics. Cambridge University Press, Cambridge, UK. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236–244, Columbus, OH. Jeff Mitchell and Mirella Lapata. 2009. Language models based on semantic composition. In Proceedings of EMNLP, pages 430–439, Singapore. 1525 Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. Saif Mohammad, Bonnie Dorr, Graeme Hirst, and Peter Turney. 2013. Computing lexical contrast. Computational Linguistics. In press. Jason Naradowsky and Sharon Goldwater. 2009. Improving morphology induction by learning spelling rules. In Proceedings of IJCAI, pages 11–17, Pasadena, CA. Ingo Plag. 1999. Morphological Productivity: Structural Constraints in English Derivation. Mouton de Gruyter, Berlin, Germany. Anne Preller and Mehrnoosh Sadrzadeh. 2011. 
Bell states and negative sentences in the distributed model of meaning. Electr. Notes Theor. Comput. Sci., 270(2):141–153. Patrick Schone and Daniel Jurafsky. 2000. Knowledge-free induction of morphology using latent semantic analysis. In Proceedings of the ConLL workshop on learning language in logic, pages 67–72, Lisbon, Portugal. Richard Socher, Eric Huang, Jeffrey Pennin, Andrew Ng, and Christopher Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of NIPS, pages 801–809, Granada, Spain. Richard Socher, Brody Huval, Christopher Manning, and Andrew Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP, pages 1201–1211, Jeju Island, Korea. Richard Sproat. 1992. Morphology and Computation. MIT Press, Cambrdige, MA. Peter Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Hsueh-Cheng Wang, Yi-Min Tien, Li-Chuan Hsu, and Marc Pomplun. 2012. Estimating semantic transparency of constituents of English compounds and two-character Chinese words using Latent Semantic Analysis. In Proceedings of CogSci, pages 2499– 2504, Sapporo, Japan. Richard Wicentowski. 2004. Multilingual noiserobust supervised morphological analysis using the wordframe model. In Proceedings of SIGPHON, pages 70–77, Barcelona, Spain. David Yarowsky and Richard Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In Proceedings of ACL, pages 207–216, Hong Kong. Fabio Zanzotto, Ioannis Korkontzelos, Francesca Falucchi, and Suresh Manandhar. 2010. Estimating linear models for compositional distributional semantics. In Proceedings of COLING, pages 1263– 1271, Beijing, China. 1526
2013
149
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 145–154, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Generic binarization for parsing and translation Matthias B¨uchse Technische Universit¨at Dresden [email protected] Alexander Koller University of Potsdam [email protected] Heiko Vogler Technische Universit¨at Dresden [email protected] Abstract Binarization of grammars is crucial for improving the complexity and performance of parsing and translation. We present a versatile binarization algorithm that can be tailored to a number of grammar formalisms by simply varying a formal parameter. We apply our algorithm to binarizing tree-to-string transducers used in syntax-based machine translation. 1 Introduction Binarization amounts to transforming a given grammar into an equivalent grammar of rank 2, i.e., with at most two nonterminals on any righthand side. The ability to binarize grammars is crucial for efficient parsing, because for many grammar formalisms the parsing complexity depends exponentially on the rank of the grammar. It is also critically important for tractable statistical machine translation (SMT). Syntaxbased SMT systems (Chiang, 2007; Graehl et al., 2008) typically use some type of synchronous grammar describing a binary translation relation between strings and/or trees, such as synchronous context-free grammars (SCFGs) (Lewis and Stearns, 1966; Chiang, 2007), synchronous tree-substitution grammars (Eisner, 2003), synchronous tree-adjoining grammars (Nesson et al., 2006; DeNeefe and Knight, 2009), and tree-tostring transducers (Yamada and Knight, 2001; Graehl et al., 2008). These grammars typically have a large number of rules, many of which have rank greater than two. The classical approach to binarization, as known from the Chomsky normal form transformation for context-free grammars (CFGs), proceeds rule by rule. It replaces each rule of rank greater than 2 by an equivalent collection of rules of rank 2. All CFGs can be binarized in this way, which is why their recognition problem is cubic. In the case of linear context-free rewriting systems (LCFRSs, (Weir, 1988)) the rule-by-rule technique also applies to every grammar, as long as an increased fanout it permitted (Rambow and Satta, 1999). There are also grammar formalisms for which the rule-by-rule technique is not complete. In the case of SCFGs, not every grammar has an equivalent representation of rank 2 in the first place (Aho and Ullman, 1969). Even when such a representation exists, it is not always possible to compute it rule by rule. Nevertheless, the rule-by-rule binarization algorithm of Huang et al. (2009) is very useful in practice. In this paper, we offer a generic approach for transferring the rule-by-rule binarization technique to new grammar formalisms. At the core of our approach is a binarization algorithm that can be adapted to a new formalism by changing a parameter at runtime. Thus it only needs to be implemented once, and can then be reused for a variety of formalisms. More specifically, our algorithm requires the user to (i) encode the grammar formalism as a subclass of interpreted regular tree grammars (IRTGs, (Koller and Kuhlmann, 2011)) and (ii) supply a collection of b-rules, which represent equivalence of grammars syntactically. Our algorithm then replaces, in a given grammar, each rule of rank greater than 2 by an equivalent collection of rules of rank 2, if such a collection is licensed by the b-rules. 
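The classical rule-by-rule construction mentioned above can be sketched for plain CFGs in a few lines: a rule with more than two right-hand-side symbols is split left to right by introducing fresh nonterminals. This is the textbook construction, shown only to fix intuitions; the algorithm developed in this paper generalises the idea to IRTGs via b-rules.

def binarize_cfg_rule(lhs, rhs, fresh):
    """Split a CFG rule lhs -> rhs (a list of symbols) into rules of rank <= 2.

    `fresh` is a callable returning a new, unused nonterminal name.
    """
    rules = []
    while len(rhs) > 2:
        new_nt = fresh()
        rules.append((lhs, [rhs[0], new_nt]))   # lhs -> first_symbol new_nt
        lhs, rhs = new_nt, rhs[1:]
    rules.append((lhs, rhs))
    return rules

counter = iter(range(10**6))
fresh = lambda: "X%d" % next(counter)
print(binarize_cfg_rule("A", ["B", "C", "D"], fresh))
# [('A', ['B', 'X0']), ('X0', ['C', 'D'])]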
We define completeness of b-rules in a way that ensures that if any equivalent collection of rules of rank 2 exists, the algorithm finds one. As a consequence, the algorithm binarizes every grammar that can be binarized rule by rule. Step (i) is possible for all the grammar formalisms mentioned above. We show Step (ii) for SCFGs and tree-to-string transducers. We will use SCFGs as our running example throughout the paper. We will also apply the algo145 rithm to tree-to-string transducers (Graehl et al., 2008; Galley et al., 2004), which describe relations between strings in one language and parse trees of another, which means that existing methods for binarizing SCFGs and LCFRSs cannot be directly applied to these systems. To our knowledge, our binarization algorithm is the first to binarize such transducers. We illustrate the effectiveness of our system by binarizing a large treeto-string transducer for English-German SMT. Plan of the paper. We start by defining IRTGs in Section 2. In Section 3, we define the general outline of our approach to rule-by-rule binarization for IRTGs, and then extend this to an efficient binarization algorithm based on b-rules in Section 4. In Section 5 we show how to use the algorithm to perform rule-by-rule binarization of SCFGs and tree-to-string transducers, and relate the results to existing work. 2 Interpreted regular tree grammars Grammar formalisms employed in parsing and SMT, such as those mentioned in the introduction, differ in the the derived objects—e.g., strings, trees, and graphs—and the operations involved in the derivation—e.g., concatenation, substitution, and adjoining. Interpreted regular tree grammars (IRTGs) permit a uniform treatment of many of these formalisms. To this end, IRTGs combine two ideas, which we explain here. Algebras IRTGs represent the objects and operations symbolically using terms; the object in question is obtained by interpreting each symbol in the term as a function. As an example, Table 1 shows terms for a string and a tree, together with the denoted object. In the string case, we describe complex strings as concatenation (con2) of elementary symbols (e.g., a, b); in the tree case, we alternate the construction of a sequence of trees (con2) with the construction of a single tree by placing a symbol (e.g., α, β, σ) on top of a (possibly empty) sequence of trees. Whenever a term contains variables, it does not denote an object, but rather a function. In the parlance of universalalgebra theory, we are employing initial-algebra semantics (Goguen et al., 1977). An alphabet is a nonempty finite set. Throughout this paper, let X = {x1, x2, . . . } be a set, whose elements we call variables. We let Xk denote the set {x1, . . . , xk} for every k ≥0. Let Σ be an alphabet and V ⊆X. We write TΣ(V ) for the set of all terms over Σ with variables V , i.e., the smallest set T such that (i) V ⊆T and (ii) for every σ ∈Σ, k ≥0, and t1, . . . , tk ∈T, we have σ(t1, . . . , tk) ∈T. Alternatively, we view TΣ(V ) as the set of all (rooted, labeled, ordered, unranked) trees over Σ and V , and draw them as usual. By TΣ we abbreviate TΣ(∅). The set CΣ(V ) of contexts over Σ and V is the set of all trees over Σ and V in which each variable in V occurs exactly once. A signature is an alphabet Σ where each symbol is equipped with an arity. We write Σ|k for the subset of all k-ary symbols of Σ, and σ|k to denote σ ∈Σ|k. We denote the signature by Σ as well. A signature is binary if the arities do not exceed 2. 
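The definition of TΣ(V) over a ranked signature can be made concrete with a small membership check; the nested-tuple encoding below (bare strings for variables and nullary symbols) is our own illustrative convention, not part of the paper’s formalisation.

def is_term(tree, signature, variables=frozenset()):
    """Check membership in T_Sigma(V): a tree is a variable in V, or a node
    labeled by a k-ary symbol of the signature with exactly k subterms.

    `signature` maps symbols to arities, e.g. {"con2": 2, "a": 0, "b": 0};
    trees are encoded as nested tuples, nullary symbols as bare strings.
    """
    if isinstance(tree, str):
        return tree in variables or signature.get(tree) == 0
    symbol, *children = tree
    return (signature.get(symbol) == len(children)
            and all(is_term(c, signature, variables) for c in children))

sig = {"con2": 2, "con3": 3, "a": 0, "b": 0}
print(is_term(("con2", "a", "x1"), sig, variables={"x1"}))   # True
print(is_term(("con3", "a", "b"), sig))                      # False: con3 needs 3 children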
Whenever we use TΣ(V ) with a signature Σ, we assume that the trees are ranked, i.e., each node labeled by σ ∈Σ|k has exactly k children. Let ∆be a signature. A ∆-algebra A consists of a nonempty set A called the domain and, for each symbol f ∈∆with rank k, a total function fA : Ak →A, the operation associated with f. We can evaluate any term t in T∆(Xk) in A, to obtain a k-ary operation tA over the domain. In particular, terms in T∆evaluate to elements of A. For instance, in the string algebra shown in Table 1, the term con2(a, b) evaluates to ab, and the term con2(con2(x2, a), x1) evaluates to a binary operation f such that, e.g., f(b, c) = cab. Bimorphisms IRTGs separate the finite control (state behavior) of a derivation from its derived object (in its term representation; generational behavior); the former is captured by a regular tree language, while the latter is obtained by applying a tree homomorphism. This idea goes back to the tree bimorphisms of Arnold and Dauchet (1976). Let Σ be a signature. A regular tree grammar (RTG) G over Σ is a triple (Q, q0, R) where Q is a finite set (of states), q0 ∈Q, and R is a finite set of rules of the form q →α(q1, . . . , qk), where q ∈Q, α ∈Σ|k and q, q1, . . . , qk ∈Q. We call α the terminal symbol and k the rank of the rule. Rules of rank greater than two are called suprabinary. For every q ∈Q we define the language Lq(G) derived from q as the set {α(t1, . . . , tk) | q →α(q1, . . . , qk) ∈R, tj ∈ Lqj(G)}. If q = q0, we drop the superscript and write L(G) for the tree language of G. In the literature, there is a definition of RTG which also permits more than one terminal symbol per rule, 146 strings over Γ trees over Γ example term and denoted object con2 a b 7→ab σ con2 α con0 β con0 7→ σ α β domain Γ∗ T ∗ Γ (set of sequences of trees) signature ∆ {a|0 | a ∈Γ} ∪ {γ|1 | γ ∈Γ} ∪ {conk|k | 0 ≤k ≤K, k ̸= 1} {conk|k | 0 ≤k ≤K, k ̸= 1} operations a: () 7→a γ : x1 7→γ(x1) conk : (x1, . . . , xk) 7→x1 · · · xk conk : (x1, . . . , xk) 7→x1 · · · xk Table 1: Algebras for strings and trees, given an alphabet Γ and a maximum arity K ∈N. or none. This does not increase the generative capacity (Brainerd, 1969). A (linear, nondeleting) tree homomorphism is a mapping h: TΣ(X) →T∆(X) that satisfies the following condition: there is a mapping g: Σ → T∆(X) such that (i) g(σ) ∈C∆(Xk) for every σ ∈Σ|k, (ii) h(σ(t1, . . . , tk)) is the tree obtained from g(σ) by replacing the occurrence of xj by h(tj), and (iii) h(xj) = xj. This extends the usual definition of linear and nondeleting homomorphisms (G´ecseg and Steinby, 1997) to trees with variables. We abuse notation and write h(σ) for g(σ) for every σ ∈Σ. Let n ≥1 and ∆1, . . . , ∆n be signatures. A (generalized) bimorphism over (∆1, . . . , ∆n) is a tuple B = (G, h1, . . . , hn) where G is an RTG over some signature Σ and hi is a tree homomorphism from TΣ(X) into T∆i(X). The language L(B) induced by B is the tree relation {(h1(t), . . . , hn(t)) | t ∈L(G)}. An IRTG is a bimorphism whose derived trees are viewed as terms over algebras; see Fig. 1. Formally, an IRTG G over (∆1, . . . , ∆n) is a tuple (B, A1, . . . , An) such that B is a bimorphism over (∆1, . . . , ∆n) and Ai is a ∆i-algebra. The language L(G) induced by G is the relation {(tA1 1 , . . . , tAn n ) | (t1, . . . , tn) ∈L(B)}. We call the trees in L(G) derivation trees and the terms in L(B) semantic terms. We say that two IRTGs G and G′ are equivalent if L(G) = L(G′). IRTGs were first defined in (Koller and Kuhlmann, 2011). For example, Fig. 
2 is an IRTG that encodes a synchronous context-free grammar (SCFG). It contains a bimorphism B = (G, h1, h2) consisting of an RTG G with four rules and homomorL(G) T∆1 · · · T∆n A1 · · · An h1 hn (.)A1 (.)An ⊆TΣ bimorphism B = (G, h1, h2) IRTG G = (B, A1, A2) derivation trees semantic terms derived objects Figure 1: IRTG, bimorphism overview. A →α(B, C, D) B →α1, C →α2, D →α3 con3 x1 x2 x3 h1 ←−[ α h2 7−→ con4 x3 a x1 x2 b h1 ←−[ α1 h2 7−→ b c h1 ←−[ α2 h2 7−→ c d h1 ←−[ α3 h2 7−→ d Figure 2: An IRTG encoding an SCFG. phisms h1 and h2 which map derivation trees to trees over the signature of the string algebra in Table 1. By evaluating these trees in the algebra, the symbols con3 and con4 are interpreted as concatenation, and we see that the first rule encodes the SCFG rule A →⟨BCD, DaBC⟩. Figure 3 shows a derivation tree with its two homomorphic images, which evaluate to the strings bcd and dabc. IRTGs can be tailored to the expressive capacity of specific grammar formalisms by selecting suitable algebras. The string algebra in Table 1 yields context-free languages, more complex string al147 con3 b c d h1 ←−[ α α1 α2 α3 h2 7−→ con4 d a b c Figure 3: Derivation tree and semantic terms. A →α′(A′, D) A′ →α′′(B, C) con2 x1 x2 h′ 1 ←−[ α′ h′ 2 7−→ con2 con2 x2 a x1 con2 x1 x2 h′ 1 ←−[ α′′ h′ 2 7−→ con2 x1 x2 Figure 4: Binary rules corresponding to the α-rule in Fig. 2. gebras yield tree-adjoining languages (Koller and Kuhlmann, 2012), and algebras over other domains can yield languages of trees, graphs, or other objects. Furthermore, IRTGs with n = 1 describe languages that are subsets of the algebra’s domain, n = 2 yields synchronous languages or tree transductions, and so on. 3 IRTG binarization We will now show how to apply the rule-by-rule binarization technique to IRTGs. We start in this section by defining the binarization of a rule in an IRTG, and characterizing it in terms of binarization terms and variable trees. We derive the actual binarization algorithm from this in Section 4. For the remainder of this paper, let G = (B, A1, . . . , An) be an IRTG over (∆1, . . . , ∆n) with B = (G, h1, . . . , hn). 3.1 An introductory example We start with an example to give an intuition of our approach. Consider the first rule in Fig. 2, which has rank three. This rule derives (in one step) the fragment α(x1, x2, x3) of the derivation tree in Fig. 3, which is mapped to the semantic terms h1(α) and h2(α) shown in Fig. 2. Now consider the rules in Fig. 4. These rules can be used to derive (in two steps) the derivation tree fragment ξ in Fig. 5e. Note that the terms h′ 1(ξ) and h1(α) are equivalent in that they denote the same function over the string algebra, and so are the terms h′ 2(ξ) and h2(α). Thus, replacing the α-rule by the rules in Fig. 4 does not change the language of the IRTG. However, since the new rules are binary, (a) con3 x1 x2 x3 con4 x3 a x1 x2 (b) con2 x1 con2 x2 x3 con2 con2 x1 x2 x3 t1 : con2 con2 x3 a con2 x1 x2 t2 : con2 con2 x3 con2 a x1 x2 (c) (d) con2 x1 x2 x1 con2 x1 x2 x1 x2 con2 con2 x2 a x1 x1 con2 x1 x2 x1 x2 (e) h1 ←−[ α h2 7−→ {x1, x2, x3} {x1} {x2, x3} {x2} {x3} {x1, x2, x3} {x1, x2} {x1} {x2} {x3} τ : {x1, x2, x3} {x1, x3} {x1} {x3} {x2} con2 con2 x1 x2 x3 t1 : h′ 1 ←−[ α′ α′′ x1 x2 x3 ξ : h′ 2 7−→ con2 con2 x3 a con2 x1 x2 t2 : Figure 5: Outline of the binarization algorithm. parsing and translation will be cheaper. Now we want to construct the binary rules systematically. In the example, we proceed as follows (cf. Fig. 5). 
For each of the terms h1(α) and h2(α) (Fig. 5a), we consider all terms that satisfy two properties (Fig. 5b): (i) they are equivalent to h1(α) and h2(α), respectively, and (ii) at each node at most two subtrees contain variables. As Fig. 5 suggests, there may be many different terms of this kind. For each of these terms, we analyze the bracketing of variables, obtaining what we call a variable tree (Fig. 5c). Now we pick terms t1 and t2 corresponding to h1(α) and h2(α), respectively, such that (iii) they have the same variable tree, say τ. We construct a tree ξ from τ by a simple relabeling, and we read off the tree homomorphisms h′ 1 and h′ 2 from a decomposition we perform on t1 and t2, respectively; see Fig. 5, dotted arrows, and compare the boxes in Fig. 5d with the homomorphisms in Fig. 4. Now the rules in Fig. 4 are easily extracted from ξ. These rules are equivalent to r because of (i); they are binary because ξ is binary, which in turn holds because of (ii); finally, the decompositions of t1 and t2 are compatible with ξ because of (iii). We call terms t1 and t2 binarization terms if they satisfy (i)–(iii). We will see below that we can con148 struct binary rules equivalent to r from any given sequence of binarization terms t1, t2, and that binarization terms exist whenever equivalent binary rules exist. The majority of this paper revolves around the question of finding binarization terms. Rule-by-rule binarization of IRTGs follows the intuition laid out in this example closely: it means processing each suprabinary rule, attempting to replace it with an equivalent collection of binary rules. 3.2 Binarization terms We will now make this intuition precise. To this end, we assume that r = q →α(q1, . . . , qk) is a suprabinary rule of G. As we have seen, binarizing r boils down to constructing: • a tree ξ over some binary signature Σ′ and • tree homomorphisms h′ 1, . . . , h′ n of type h′ i : TΣ′(X) →T∆i(X), such that h′ i(ξ) and hi(α) are equivalent, i.e., they denote the same function over Ai. We call such a tuple (ξ, h′ 1, . . . , h′ n) a binarization of the rule r. Note that a binarization of r need not exist. The problem of rule-by-rule binarization consists in computing a binarization of each suprabinary rule of a grammar. If such a binarization does not exist, the problem does not have a solution. In order to define variable trees, we assume a mapping seq that maps each finite set U of pairwise disjoint variable sets to a sequence over U which contains each element exactly once. Let t ∈C∆(Xk). The variable set of t is the set of all variables that occur in t. The set S(t) of subtree variables of t consists of the nonempty variable sets of all subtrees of t. We represent S(t) as a tree v(t), which we call variable tree as follows. Any two elements of S(t) are either comparable (with respect to the subset relation) or disjoint. We extend this ordering to a tree structure by ordering disjoint elements via seq. We let v(L) = {v(t) | t ∈L} for every L ⊆C∆(Xk). In the example of Fig. 5, t1 and t2 have the same set of subtree variables; it is {{x1}, {x2}, {x3}, {x1, x2}, {x1, x2, x3}}. If we assume that seq orders sets of variables according to the least variable index, we arrive at the variable tree in the center of Fig. 5. Now let t1 ∈T∆1(Xk), . . . , tn ∈T∆n(Xk). We call the tuple t1, . . . 
, tn binarization terms of r if the following properties hold: (i) hi(α) and ti are equivalent; (ii) at each node the tree ti contains at most two subtrees with variables; and (iii) the terms t1, . . . , tn have the same variable tree. Assume for now that we have found binarization terms t1, . . . , tn. We show how to construct a binarization (ξ, h′ 1, . . . , h′ n) of r with ti = h′ i(ξ). First, we construct ξ. Since t1, . . . , tn are binarization terms, they have the same variable tree, say, τ. We obtain ξ from τ by replacing every label of the form {xj} with xj, and every other label with a fresh symbol. Because of condition (ii) in in the definition of binarization terms, ξ is binary. In order to construct h′ i(σ) for each symbol σ in ξ, we transform ti into a tree t′ i with labels from C∆i(X) and the same structure as ξ. Then we read off h′ i(σ) from the node of t′ i that corresponds to the σ-labeled node of ξ. The transformation proceeds as illustrated in Fig. 6: first, we apply the maximal decomposition operation ⇝d; it replaces every label f ∈∆i|k by the tree f(x1, . . . , xk), represented as a box. After that, we keep applying the merge operation ⇝m as often as possible; it merges two boxes that are in a parent-child relation, given that one of them has at most one child. Thus the number of variables in any box can only decrease. Finally, the reorder operation ⇝o orders the children of each box according to the seq of their variable sets. These operations do not change the variable tree; one can use this to show that t′ i has the same structure as ξ. Thus, if we can find binarization terms, we can construct a binarization of r. Conversely, for any given binarization (ξ, h′ 1, . . . , h′ n) the semantic terms h′ 1(ξ), . . . , h′ n(ξ) are binarization terms. This proves the following lemma. Lemma 1 There is a binarization of r if and only if there are binarization terms of r. 3.3 Finding binarization terms It remains to show how we can find binarization terms of r, if there are any. Let bi : T∆i(Xk) →P(T∆i(Xk)) the mapping with bi(t) = {t′ ∈T∆i(Xk) | t and t′ are equivalent, and at each node t′ has at most two children with variables}. Figure 5b shows some elements of b1(h1(α)) and b2(h2(α)) for our example. Terms t1, . . . , tn are binarization terms precisely when ti ∈bi(hi(α)) and t1, . . . , tn have the same variable tree. Thus we can characterize binarization terms as follows. Lemma 2 There are binarization terms if and only if T i v(bi(hi(α))) ̸= ∅. 149 con2 con2 x3 a con2 x1 x2 ⇝d con2 x1 x2 con2 x1 x2 x3 a con2 x1 x2 x1 x2 ⇝m con2 x1 x2 con2 x1 a x3 con2 x1 x2 x1 x2 ⇝m con2 con2 x1 a x2 x3 con2 x1 x2 x1 x2 ⇝o con2 con2 x2 a x1 con2 x1 x2 x1 x2 x3 Figure 6: Transforming t2 into t′ 2. This result suggests the following procedure for obtaining binarization terms. First, determine whether the intersection in Lemma 2 is empty. If it is, then there is no binarization of r. Otherwise, select a variable tree τ from this set. We know that there are trees t1, . . . , tn such that ti ∈bi(hi(α)) and v(ti) = τ. We can therefore select arbitrary concrete trees ti ∈bi(hi(α)) ∩v−1(τ). The terms t1, . . . , tn are then binarization terms. 4 Effective IRTG binarization In this section we develop our binarization algorithm. Its key task is finding binarization terms t1, . . . , tn. This task involves deciding term equivalence, as ti must be equivalent to hi(α). In general, equivalence is undecidable, so the task cannot be solved. 
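Before turning to b-rules, the variable-tree construction used in Lemma 2 can be sketched directly on the nested-tuple term encoding used earlier. The helper names, and the choice of seq as ordering disjoint variable sets by their least variable index, are assumptions of this illustration.

```python
# Sketch: subtree variables S(t) and the variable tree v(t) of Section 3,
# for terms encoded as nested tuples with variables "x1", "x2", ...

def variables(term):
    if isinstance(term, str):
        return frozenset([term])
    return frozenset().union(*(variables(c) for c in term[1:]))

def subtree_variables(term):
    """All nonempty variable sets of subtrees of term, i.e., S(t)."""
    if isinstance(term, str):
        return {frozenset([term])}
    sets = set()
    v = variables(term)
    if v:
        sets.add(v)
    for child in term[1:]:
        sets |= subtree_variables(child)
    return sets

def variable_tree(term):
    """Nest the sets of S(t) by inclusion; disjoint siblings ordered by seq."""
    sets = sorted(subtree_variables(term), key=len, reverse=True)
    def build(s, candidates):
        children = [c for c in candidates
                    if c < s and not any(c < d < s for d in candidates)]
        children.sort(key=lambda c: min(c))   # seq: least variable index first
        return (s, [build(c, candidates) for c in children])
    return build(sets[0], sets)

# con2(x1, con2(x2, x3)) yields the variable tree
# {x1,x2,x3} -> {x1}, {x2,x3}; and {x2,x3} -> {x2}, {x3}.
t = ("con2", "x1", ("con2", "x2", "x3"))
print(variable_tree(t))
```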
We avoid deciding equivalence by requiring the user to specify an explicit approximation of bi, which we call a b-rule. This parameter gives rise to a restricted version of the ruleby-rule binarization problem, which is efficiently computable while remaining practically relevant. Let ∆be a signature. A binarization rule (brule) over ∆is a mapping b: ∆→P(T∆(X)) where for every f ∈∆|k we have that b(f) ⊆ C∆(Xk), at each node of a tree in b(f) only two children contain variables, and b(f) is a regular tree language. We extend b to T∆(X) by setting b(xj) = {xj} and b(f(t1, . . . , tk)) = {t[xj/t′ j | 1 ≤j ≤k] | t ∈b(f), t′ j ∈b(tj)}, where [xj/t′ j] denotes substitution of xj by t′ j. Given an algebra A over ∆, a b-rule b over ∆is called a b-rule over A if, for every t ∈T∆(Xk) and t′ ∈b(t), t′ and t are equivalent in A. Such a b-rule encodes equivalence in A, and it does so in an explicit and compact way: because b(f) is a regular tree language, a b-rule can be specified by a finite collection of RTGs, one for each symbol f ∈∆. We will look at examples (for the string and tree algebras shown earlier) in Section 5. From now on, we assume that b1, . . . , bn are b-rules over A1, . . . , An, respectively. A binarization (ξ, h′ 1, . . . , h′ n) of r is a binarization of r with respect to b1, . . . , bn if h′ i(ξ) ∈bi(hi(α)). Likewise, binarization terms t1, . . . , tn are binarization terms with respect to b1, . . . , bn if ti ∈bi(hi(α)). Lemmas 1 and 2 carry over to the restricted notions. The problem of rule-byrule binarization with respect to b1, . . . , bn consists in computing a binarization with respect to b1, . . . , bn for each suprabinary rule. By definition, every solution to this restricted problem is also a solution to the general problem. The converse need not be true. However, we can guarantee that the restricted problem has at least one solution whenever the general problem has one, by requiring v(bi(hi(α)) = v(b(hi(α)). Then the intersection in Lemma 2 is empty in the restricted case if and only if it is empty in the general case. We call the b-rules b1, . . . , b1 complete on G if the equation holds for every α ∈Σ. Now we show how to effectively compute binarization terms with respect to b1, . . . , bn, along the lines of Section 3.3. More specifically, we construct an RTG for each of the sets (i) bi(hi(α)), (ii) b′ i = v(bi(hi(α))), (iii) T i b′ i, and (iv) b′′ i = bi(hi(α))∩v−1(τ) (given τ). Then we can select τ from (iii) and ti from (iv) using a standard algorithm, such as the Viterbi algorithm or Knuth’s algorithm (Knuth, 1977; Nederhof, 2003; Huang and Chiang, 2005). The effectiveness of our procedure stems from the fact that we only manipulate RTGs and never enumerate languages. The construction for (i) is recursive, following the definition of bi. The base case is a language {xj}, for which the RTG is easy. For the recursive case, we use the fact that regular tree languages are closed under substitution (G´ecseg and Steinby, 1997, Prop. 7.3). Thus we obtain an RTG Gi with L(Gi) = bi(hi(α)). For (ii) and (iv), we need the following auxiliary 150 construction. Let Gi = (P, p0, R). We define the mapping vari : P →P(Xk) such that for every p ∈P, every t ∈Lp(Gi) contains exactly the variables in vari(p). We construct it as follows. We initialize vari(p) to “unknown” for every p. For every rule p →xj, we set vari(p) = {xj}. For every rule p →σ(p1, . . . , pk) such that vari(pj) is known, we set vari(p) = S j vari(pj). 
This is iterated; it can be shown that vari(p) is never assigned two different values for the same p. Finally, we set all remaining unknown entries to ∅. For (ii), we construct an RTG G′ i with L(G′ i) = b′ i as follows. We let G′ i = ({⟨vari(p)⟩| p ∈ P}, vari(p0), R′) where R′ consists of the rules ⟨{xj}⟩→{xj} if p →xi ∈R , ⟨vari(p)⟩→vari(p)(⟨U1⟩, . . . , ⟨Ul⟩⟩) if p →σ(p1, . . . , pk) ∈R, V = {vari(pj) | 1 ≤j ≤k} \ {∅}, |V | ≥2, seq(V ) = (U1, . . . , Ul) . For (iii), we use the standard product construction (G´ecseg and Steinby, 1997, Prop. 7.1). For (iv), we construct an RTG G′′ i such that L(G′′ i ) = b′′ i as follows. We let G′′ i = (P, p0, R′′), where R′′ consists of the rules p →σ(p1, . . . , pk) if p →σ(p1, . . . , pk) ∈R, V = {vari(pj) | 1 ≤j ≤k} \ {∅}, if |V | ≥2, then (vari(p), seq(V )) is a fork in τ . By a fork (u, u1 · · · uk) in τ, we mean that there is a node labeled u with k children labeled u1 up to uk. At this point we have all the ingredients for our binarization algorithm, shown in Algorithm 1. It operates directly on a bimorphism, because all the relevant information about the algebras is captured by the b-rules. The following theorem documents the behavior of the algorithm. In short, it solves the problem of rule-by-rule binarization with respect to b-rules b1, . . . , bn. Theorem 3 Let G = (B, A1, . . . , An) be an IRTG, and let b1, . . . , bn be b-rules over A1, . . . , An, respectively. Algorithm 1 terminates. Let B′ be the bimorphism computed by Algorithm 1 on B and b1, . . . , bn. Then G′ = (B′, A1, . . . , An) is equivalent to G, and G′ is of rank 2 if and only Input: bimorphism B = (G, h1, . . . , hn), b-rules b1, . . . , bn over ∆1, . . . , ∆n Output: bimorphism B′ 1: B′ ←(G|≤2, h1, . . . , hn) 2: for rule r: q →α(q1, . . . , qk) of G|>2 do 3: for i = 1, . . . , n do 4: compute RTG Gi for bi(hi(α)) 5: compute RTG G′ i for v(bi(hi(α))) 6: compute RTG Gv for T i L(G′ i) 7: if L(Gv) = ∅then 8: add r to B′ 9: else 10: select t′ ∈L(Gv) 11: for i = 1, . . . , n do 12: compute RTG G′′ i for 13: b′′ i = bi(hi(α)) ∩v−1(t′) 14: select ti ∈L(G′′ i ) 15: construct binarization for t1, . . . , tn 16: add appropriate rules to B′ Algorithm 1: Complete binarization algorithm, where G|≤2 and G|>2 is G restricted to binary and suprabinary rules, respectively. if every suprabinary rule of G has a binarization with respect to b1, . . . , bn. The runtime of Algorithm 1 is dominated by the intersection construction in line 6, which is O(m1· . . . · mn) per rule, where mi is the size of G′ i. The quantity mi is linear in the size of the terms on the right-hand side of hi, and in the number of rules in the b-rule bi. 5 Applications Algorithm 1 implements rule-by-rule binarization with respect to given b-rules. If a rule of the given IRTG does not have a binarization with respect to these b-rules, it is simply carried over to the new grammar, which then has a rank higher than 2. The number of remaining suprabinary rules depends on the b-rules (except for rules that have no binarization at all). The user can thus engineer the b-rules according to their current needs, trading off completeness, runtime, and engineering effort. By contrast, earlier binarization algorithms for formalisms such as SCFG and LCFRS simply attempt to find an equivalent grammar of rank 2; there is no analogue of our b-rules. The problem these algorithms solve corresponds to the general rule-by-rule binarization problem from Section 3. 
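The fixed-point construction of the mapping vari described in Section 4 is simple to implement. The sketch below assumes RTG rules encoded as (state, label, child-state list) triples and encodes variable leaves as rules whose label is the variable name; both conventions are assumptions of this illustration, not the authors' code.

```python
# Sketch of the auxiliary var_i mapping: for each RTG state, the set of
# variables occurring in the trees derivable from that state.

def compute_var(states, rules):
    var = {p: None for p in states}            # None means "unknown"
    changed = True
    while changed:
        changed = False
        for p, label, children in rules:
            if var[p] is not None:             # a value, once set, is kept
                continue
            if label.startswith("x") and not children:   # variable leaf rule
                var[p] = frozenset([label])
                changed = True
            elif all(var[q] is not None for q in children):
                var[p] = frozenset().union(*(var[q] for q in children))
                changed = True
    # remaining unknown entries become the empty set
    return {p: (v if v is not None else frozenset()) for p, v in var.items()}

# Tiny example: r derives con2(x1, x2) via r -> con2(q1, q2), q1 -> x1, q2 -> x2.
rules = [("r", "con2", ["q1", "q2"]), ("q1", "x1", []), ("q2", "x2", [])]
print(compute_var({"r", "q1", "q2"}, rules))   # r: {x1, x2}, q1: {x1}, q2: {x2}
```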
151 NP NP DT the x1:NNP POS ’s x2:JJ x3:NN −→das x2 x3 der x1 Figure 7: A rule of a tree-to-string transducer. We show that under certain conditions, our algorithm can be used to solve this problem as well. In the following two subsections, we illustrate this for SCFGs and tree-to-string transducers, respectively. In the final subsection, we discuss how to extend this approach to other grammar formalisms as well. 5.1 Synchronous context-free grammars We have used SCFGs as the running example in this paper. SCFGs are IRTGs with two interpretations into the string algebra of Table 1, as illustrated by the example in Fig. 2. In order to make our algorithm ready to use, it remains to specify a b-rule for the string algeba. We use the following b-rule for both b1 and b2. Each symbol a ∈∆i|0 is mapped to the language {a}. Each symbol conk, k ≥2, is mapped to the language induced by the following RTG with states of the form [j, j′] (where 0 ≤j < j′ ≤k) and final state [0, k]: [j −1, j] →xj (1 ≤j ≤k) [j, j′] →con2([j, j′′], [j′′, j′]) (0 ≤j < j′′ < j′ ≤k) This language expresses all possible ways in which conk can be written in terms of con2. Our definition of rule-by-rule binarization with respect to b1 and b2 coincides with that of Huang et al. (2009): any rule can be binarized by both algorithms or neither. For instance, for the SCFG rule A →⟨BCDE, CEBD⟩, the sets v(b1(h1(α))) and v(b2(h2(α))) are disjoint, thus no binarization exists. Two strings of length N can be parsed with a binary IRTG that represents an SCFG in time O(N6). 5.2 Tree-to-string transducers Some approaches to SMT go beyond string-tostring translation models such as SCFG by exploiting known syntactic structures in the source or target language. This perspective on translation naturally leads to the use of tree-to-string transducers NP →α(NNP, JJ, NN) NP con3 NP con3 DT the con0 x1 POS ’s con0 x2 x3 h1 ←−[ α h2 7−→ con5 das x2 x3 der x1 Figure 8: An IRTG rule encoding the rule in Fig. 7. (Yamada and Knight, 2001; Galley et al., 2004; Huang et al., 2006; Graehl et al., 2008). Figure 7 shows an example of a tree-to-string rule. It might be used to translate “the Commission’s strategic plan” into “das langfristige Programm der Kommission”. Our algorithm can binarize tree-to-string transducers; to our knowledge, it is the first algorithm to do so. We model the tree-to-string transducer as an IRTG G = ((G, h1, h2), A1, A2), where A2 is the string algebra, but this time A1 is the tree algebra shown in Table 1. This algebra has operations conk to concatenate sequences of trees and unary γ that maps any sequence (t1, . . . , tl) of trees to the tree γ(t1, . . . , tl), viewed as a sequence of length 1. Note that we exclude the operation con1 because it is the identity and thus unnecessary. Thus the rule in Fig. 7 translates to the IRTG rule shown in Fig. 8. For the string algebra, we reuse the b-rule from Section 5.1; we call it b2 here. For the tree algebra, we use the following b-rule b1. It maps con0 to {con0} and each unary symbol γ to {γ(x1)}. Each symbol conk, k ≥2, is treated as in the string case. Using these b-rules, we can binarize the rule in Fig. 8 and obtain the rules in Fig. 9. Parsing of a binary IRTG that represents a tree-to-string transducer is O(N3 · M) for a string of length N and a tree with M nodes. We have implemented our binarization algorithm and the b-rules for the string and the tree algebra. 
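The b-rule RTG for conk given in Section 5.1 can be enumerated mechanically. The following sketch lists its states and rules for a given k; the tuple-based rule format is again an assumption of the illustration.

```python
# Sketch of the b-rule RTG for con_k over the string algebra: states are
# spans (j, j'), the start state is (0, k), and the rules express every
# way of bracketing x1 ... xk with binary con2.

def conk_brule_rtg(k):
    rules = []
    for j in range(1, k + 1):                              # (j-1, j) -> x_j
        rules.append(((j - 1, j), f"x{j}", []))
    for j in range(0, k + 1):                              # (j, j') -> con2((j, j''), (j'', j'))
        for jpp in range(j + 1, k + 1):
            for jp in range(jpp + 1, k + 1):
                rules.append(((j, jp), "con2", [(j, jpp), (jpp, jp)]))
    return (0, k), rules

# For k = 3 this RTG derives exactly con2(con2(x1,x2),x3) and con2(x1,con2(x2,x3)).
start, rules = conk_brule_rtg(3)
for r in rules:
    print(r)
```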
In order to test our implementation, we extracted a tree-to-string transducer from about a million parallel sentences of English-German Europarl data, using the GHKM rule extractor (Galley, 2010). Then we binarized the transducer. The results are shown in Fig. 10. Of the 2.15 million rules in the extracted transducer, 460,000 were suprabinary, and 67 % of these could be binarized. Binarization took 4.4 minutes on a single core of an Intel Core i5 2520M processor. 152 NP →α′(NNP, A′) A′ →α′′(JJ, NN) NP con2 NP con2 DT the con0 con2 x1 POS ’s con0 x2 h′ 1 ←−[ α′ h′ 2 7−→ con2 con2 das x2 con2 der x1 con2 x1 x2 h′ 1 ←−[ α′′ h′ 2 7−→ con2 x1 x2 Figure 9: Binarization of the rule in Fig. 8. 1 1.2 1.4 1.6 1.8 2 2.2 2.4 ext bin # rules (millions) rank 0 1 2 3 4 5 6-7 8-10 Figure 10: Rules of a transducer extracted from Europarl (ext) vs. its binarization (bin). 5.3 General approach Our binarization algorithm can be used to solve the general rule-by-rule binarization problem for a specific grammar formalism, provided that one can find appropriate b-rules. More precisely, we need to devise a class C of IRTGs over the same sequence A1, . . . , An of algebras that encodes the grammar formalism, together with brules b1, . . . , bn over A1, . . . , An that are complete on every grammar in C, as defined in Section 4. We have already seen the b-rules for SCFGs and tree-to-string transducers in the preceding subsections; now we have a closer look at the class C for SCFGs. We used the class of all IRTGs with two string algebras and in which hi(α) contains at most one occurrence of a symbol conk for every α ∈Σ. On such a grammar the b-rules are complete. Note that this would not be the case if we allowed several occurrences of conk, as in con2(con2(x1, x2), x3). This term is equivalent to itself and to con2(x1, con2(x2, x3)), but the brules only cover the former. Thus they miss one variable tree. For the term con3(x1, x2, x3), however, the b-rules cover both variable trees. Generally speaking, given C and b-rules b1, . . . , bn that are complete on every IRTG in C, Algorithm 1 solves the general rule-by-rule binarization problem on C. We can adapt Theorem 3 by requiring that G must be in C, and replacing each of the two occurrences of “binarization with respect to b1, . . . , bn” by simply “binarization”. If C is such that every grammar from a given grammar formalism can be encoded as an IRTG in C, this solves the general rule-by-rule binarization problem of that grammar formalism. 6 Conclusion We have presented an algorithm for binarizing IRTGs rule by rule, with respect to b-rules that the user specifies for each algebra. This improves the complexity of parsing and translation with any monolingual or synchronous grammar that can be represented as an IRTG. A novel algorithm for binarizing tree-to-string transducers falls out as a special case. In this paper, we have taken the perspective that the binarized IRTG uses the same algebras as the original IRTG. Our algorithm extends to grammars of arbitrary fanout (such as synchronous tree-adjoining grammar (Koller and Kuhlmann, 2012)), but unlike LCFRS-based approaches to binarization, it will not increase the fanout to ensure binarizability. In the future, we will explore IRTG binarization with fanout increase. This could be done by binarizing into an IRTG with a more complicated algebra (e.g., of string tuples). 
We might compute binarizations that are optimal with respect to some measure (e.g., fanout (Gomez-Rodriguez et al., 2009) or parsing complexity (Gildea, 2010)) by keeping track of this measure in the b-rule and taking intersections of weighted tree automata. Acknowledgments We thank the anonymous referees for their insightful remarks, and Sarah Hemmen for implementing an early version of the algorithm. Matthias B¨uchse was financially supported by DFG VO 1011/6-1. 153 References Alfred V. Aho and Jeffrey D. Ullman. 1969. Syntax directed translations and the pushdown assembler. Journal of Computer and System Sciences, 3:37–56. Andr´e Arnold and Max Dauchet. 1976. Bitransduction de forˆets. In Proc. 3rd Int. Coll. Automata, Languages and Programming, pages 74–86. Edinburgh University Press. Walter S. Brainerd. 1969. Tree generating regular systems. Information and Control, 14(2):217–231. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Steve DeNeefe and Kevin Knight. 2009. Synchronous tree-adjoining machine translation. In Proceedings of EMNLP, pages 727–736. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proceedings of the 41st ACL, pages 205–208. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of HLT/NAACL, pages 273–280. Michael Galley. 2010. GHKM rule extractor. http: //www-nlp.stanford.edu/˜mgalley/ software/stanford-ghkm-latest.tar. gz, retrieved on March 28, 2012. Ferenc G´ecseg and Magnus Steinby. 1997. Tree languages. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 3, chapter 1, pages 1–68. Springer-Verlag. Daniel Gildea. 2010. Optimal parsing strategies for linear context-free rewriting systems. In Proceedings of NAACL HLT. Joseph A. Goguen, Jim W. Thatcher, Eric G. Wagner, and Jesse B. Wright. 1977. Initial algebra semantics and continuous algebras. Journal of the ACM, 24:68–95. Carlos Gomez-Rodriguez, Marco Kuhlmann, Giorgio Satta, and David Weir. 2009. Optimal reduction of rule length in linear context-free rewriting systems. In Proceedings of NAACL HLT. Jonathan Graehl, Kevin Knight, and Jonathan May. 2008. Training tree transducers. Computational Linguistics, 34(3):391–427. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the 9th IWPT, pages 53– 64. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of the 7th AMTA, pages 66–73. Liang Huang, Hao Zhang, Daniel Gildea, and Kevin Knight. 2009. Binarization of synchronous context-free grammars. Computational Linguistics, 35(4):559–595. Donald E. Knuth. 1977. A generalization of Dijkstra’s algorithm. Information Processing Letters, 6(1):1– 5. Alexander Koller and Marco Kuhlmann. 2011. A generalized view on parsing and translation. In Proceedings of the 12th IWPT, pages 2–13. Alexander Koller and Marco Kuhlmann. 2012. Decomposing TAG algorithms using simple algebraizations. In Proceedings of the 11th TAG+ Workshop, pages 135–143. Philip M. Lewis and Richard E. Stearns. 1966. Syntax directed transduction. Foundations of Computer Science, IEEE Annual Symposium on, 0:21–35. Mark-Jan Nederhof. 2003. Weighted deductive parsing and Knuth’s algorithm. Computational Linguistics, 29(1):135–143. Rebecca Nesson, Stuart M. Shieber, and Alexander Rush. 2006. Induction of probabilistic synchronous tree-insertion grammars for machine translation. 
In Proceedings of the 7th AMTA. Owen Rambow and Giorgio Satta. 1999. Independent parallelism in finite copying parallel rewriting systems. Theoretical Computer Science, 223(1–2):87– 120. David J. Weir. 1988. Characterizing Mildly ContextSensitive Grammar Formalisms. Ph.D. thesis, University of Pennsylvania. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of the 39th ACL, pages 523–530. 154
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1527–1536, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Unsupervised Consonant-Vowel Prediction over Hundreds of Languages Young-Bum Kim and Benjamin Snyder University of Wisconsin-Madison {ybkim,bsnyder}@cs.wisc.edu Abstract In this paper, we present a solution to one aspect of the decipherment task: the prediction of consonants and vowels for an unknown language and alphabet. Adopting a classical Bayesian perspective, we performs posterior inference over hundreds of languages, leveraging knowledge of known languages and alphabets to uncover general linguistic patterns of typologically coherent language clusters. We achieve average accuracy in the unsupervised consonant/vowel prediction task of 99% across 503 languages. We further show that our methodology can be used to predict more fine-grained phonetic distinctions. On a three-way classification task between vowels, nasals, and nonnasal consonants, our model yields unsupervised accuracy of 89% across the same set of languages. 1 Introduction Over the past centuries, dozens of lost languages have been deciphered through the painstaking work of scholars, often after decades of slow progress and dead ends. However, several important writing systems and languages remain undeciphered to this day. In this paper, we present a successful solution to one aspect of the decipherment puzzle: automatically identifying basic phonetic properties of letters in an unknown alphabetic writing system. Our key idea is to use knowledge of the phonetic regularities encoded in known language vocabularies to automatically build a universal probabilistic model to successfully decode new languages. Our approach adopts a classical Bayesian perspective. We assume that each language has an unobserved set of parameters explaining its observed vocabulary. We further assume that each language-specific set of parameters was itself drawn from an unobserved common prior, shared across a cluster of typologically related languages. In turn, each cluster derives its parameters from a universal prior common to all language groups. This approach allows us to mix together data from languages with various levels of observations and perform joint posterior inference over unobserved variables of interest. At the bottom layer (see Figure 1), our model assumes a language-specific data generating HMM over words in the language vocabulary. Each word is modeled as an emitted sequence of characters, depending on a corresponding Markov sequence of phonetic tags. Since individual letters are highly constrained in their range of phonetic values, we make the assumption of one-tag-perobservation-type (e.g. a single letter is constrained to be always a consonant or always a vowel across all words in a language). Going one layer up, we posit that the languagespecific HMM parameters are themselves drawn from informative, non-symmetric distributions representing a typologically coherent language grouping. By applying the model to a mix of languages with observed and unobserved phonetic sequences, the cluster-level distributions can be inferred and help guide prediction for unknown languages and alphabets. We apply this approach to two small decipherment tasks: 1. predicting whether individual characters in an unknown alphabet and language represent vowels or consonants, and 2. 
predicting whether individual characters in an unknown alphabet and language represent vowels, nasals, or non-nasal consonants. For both tasks, our approach yields considerable 1527 success. We experiment with a data set consisting of vocabularies of 503 languages from around the world, written in a mix of Latin, Cyrillic, and Greek alphabets. In turn for each language, we consider it and its alphabet “unobserved” — we hide the graphic and phonetic properties of the symbols — while treating the vocabularies of the remaining languages as fully observed with phonetic tags on each of the letters. On average, over these 503 leave-one-languageout scenarios, our model predicts consonant/vowel distinctions with 99% accuracy. In the more challenging task of vowel/nasal/non-nasal prediction, our model achieves average accuracy over 89%. 2 Related Work The most direct precedent to the present work is a section in Knight et al. (2006) on universal phonetic decipherment. They build a trigram HMM with three hidden states, corresponding to consonants, vowels, and spaces. As in our model, individual characters are treated as the observed emissions of the hidden states. In contrast to the present work, they allow letters to be emitted by multiple states. Their experiments show that the HMM trained with EM successfully clusters Spanish letters into consonants and vowels. They further design a more sophisticated finite-state model, based on linguistic universals regarding syllable structure and sonority. Experiments with the second model indicate that it can distinguish sonorous consonants (such as n, m, l, r) from non-sonorous consonants in Spanish. An advantage of the linguistically structured model is that its predictions do not require an additional mapping step from uninterpreted hidden states to linguistic categories, as they do with the HMM. Our model and experiments can be viewed as complementary to the work of Knight et al., while also extending it to hundreds of languages. We use the simple HMM with EM as our baseline. In lieu of a linguistically designed model structure, we choose an empirical approach, allowing posterior inference over hundreds of known languages to guide the model’s decisions for the unknown script and language. In this sense, our model bears some similarity to the decipherment model of Snyder et al. (2010), which used knowledge of a related language (Hebrew) in an elaborate Bayesian framework to decipher the ancient language of Ugaritic. While the aim of the present work is more modest (discovering very basic phonetic properties of letters) it is also more widely applicable, as we don’t required detailed analysis of a known related language. Other recent work has employed a similar perspective for tying learning across languages. Naseem et al. (2009) use a non-parametric Bayesian model over parallel text to jointly learn part-of-speech taggers across 8 languages, while Cohen and Smith (2009) develop a shared logistic normal prior to couple multilingual learning even in the absence of parallel text. In similar veins, Berg-Kirkpatrick and Klein (2010) develop hierarchically tied grammar priors over languages within the same family, and BouchardCôté et al. (2013) develop a probabilistic model of sound change using data from 637 Austronesian languages. 
In our own previous work, we have developed the idea that supervised knowledge of some number of languages can help guide the unsupervised induction of linguistic structure, even in the absence of parallel text (Kim et al., 2011; Kim and Snyder, 2012)1. In the latter work we also tackled the problem of unsupervised phonemic prediction for unknown languages by using textual regularities of known languages. However, we assumed that the target language was written in a known (Latin) alphabet, greatly reducing the difficulty of the prediction task. In our present case, we assume no knowledge of any relationship between the writing system of the target language and known languages, other than that they are all alphabetic in nature. Finally, we note some similarities of our model to some ideas proposed in other contexts. We make the assumption that each observation type (letter) occurs with only one hidden state (consonant or vowel). Similar constraints have been developed for part-of-speech tagging (Lee et al., 2010; Christodoulopoulos et al., 2011), and the power of type-based sampling has been demonstrated, even in the absence of explicit model constraints (Liang et al., 2010). 3 Model Our generative Bayesian model over the observed vocabularies of hundreds of languages is 1We note that similar ideas were simultaneously proposed by other researchers (Cohen et al., 2011). 1528 1529 For example, the cluster Poisson parameter over vowel observation types might be λ = 9 (indicating 9 vowel letters on average for the cluster), while the parameter over consonant observation types might be λ = 20 (indicating 20 consonant letters on average). These priors will be distinct for each language cluster and serve to characterize its general linguistic and typological properties. We pause at this point to review the Dirichlet distribution in more detail. A k−dimensional Dirichlet with parameters α1 ...αk defines a distribution over the k −1 simplex with the following density: f(θ1 ... θk|α1 ... αk) ∝ Y i θαi−1 i where αi > 0, θi > 0, and P i θi = 1. The Dirichlet serves as the conjugate prior for the Multinomial, meaning that the posterior θ1...θk|X1...Xn is again distributed as a Dirichlet (with updated parameters). It is instructive to reparameterize the Dirichlet with k + 1 parameters: f(θ1 ... θk|α0, α′ 1 ... α′ k) ∝ Y i θα0α′ i−1 i where α0 = P i αi, and α′ i = αi/α0. In this parameterization, we have E[θi] = α′ i. In other words, the parameters α′ i give the mean of the distribution, and α0 gives the precision of the distribution. For large α0 ≫k, the distribution is highly peaked around the mean (conversely, when α0 ≪k, the mean lies in a valley). Thus, the Dirichlet parameters of a language cluster characterize both the average HMMs of individual languages within the cluster, as well as how much we expect the HMMs to vary from the mean. In the case of emission distributions, we assume symmetric Dirichlet priors — i.e. one-parameter Dirichlets with densities given by f(θ1...θk|β) ∝Q θ(β−1) i . This assumption is necessary, as we have no way to identify characters across languages in the decipherment scenario, and even the number of consonants and vowels (and thus multinomial/Dirichlet dimensions) can vary across the languages of a cluster. Thus, the mean of these Dirichlets will always be a uniform emission distribution. The single Dirichlet emission parameter per cluster will specify whether this mean is on a peak (large β) or in a valley (small β). 
In other words, it will control the expected sparsity of the resulting per-language emission multinomials. In contrast, the transition Dirichlet parameters may be asymmetric, and thus very specific and informative. For example, one cluster may have the property that CCC consonant clusters are exceedingly rare across all its languages. This property would be expressed by a very small mean α′ CCC ≪1 but large precision α0. Later we shall see examples of learned transition Dirichlet parameters. 3.3 Cluster Generation The generation of the cluster parameters (Algorithm 1) defines the highest layer of priors for our model. As Dirichlets lack a standard conjugate prior, we simply use uniform priors over the interval [0, 500]. For the cluster Poisson parameters, we use conjugate Gamma distributions with vague priors.3 4 Inference In this section we detail the inference procedure we followed to make predictions under our model. We run the procedure over data from 503 languages, assuming that all languages but one have observed character and tag sequences: w1, w2, . . . , t1, t2, . . . Since each character type w is assumed to have a single tag category, this is equivalent to observing the character token sequence along with a character-type-to-tag mapping tw. For the target language, we observe only character token sequence w1, w2, . . . We assume fixed and known parameter values only at the cluster generation level. Unobserved variables include (i) the cluster parameters α, β, λ, (ii) the cluster assignments z, (iii) the perlanguage HMM parameters θ, φ for all languages, and (iv) for the target language, the tag tokens t1, t2, . . . — or equivalently the character-type-totag mappings tw — along with the observation type-counts Nt. 4.1 Monte Carlo Approximation Our goal in inference is to predict the most likely tag tw,ℓfor each character type w in our target language ℓaccording to the posterior: f (tw,ℓ| w, t−ℓ) = ˆ f (tℓ, z, α, β | w, t−ℓ) d Θ (1) 3(1,19) for consonants, (1,10) for vowels, (0.2, 15) for nasals, and (1,16) for non-nasal consonants. 1530 where Θ = (t−w,ℓ, z, α, β), w are the observed character sequences for all languages, t−ℓare the character-to-tag mappings for the observed languages, z are the language-to-cluster assignments, and α and β are all the cluster-level transition and emission Dirichlet parameters. Sampling values (tℓ, z, α, β)N n=1 from the integrand in Equation 1 allows us to perform the standard Monte Carlo approximation: f (tw,ℓ= t | w, t−ℓ) ≈N−1 N X n=1 I (tw,ℓ= t in sample n) (2) To maximize the Monte Carlo posterior, we simply take the most commonly sampled tag value for character type w in language ℓ. Note that we leave out the language-level HMM parameters (θ, φ) as well as the cluster-level Poisson parameters λ from Equation 1 (and thus our sample space), as we can analytically integrate them out in our sampling equations. 4.2 Gibbs Sampling To sample values (tℓ, z, α, β) from their posterior (the integrand of Equation 1), we use Gibbs sampling, a Monte Carlo technique that constructs a Markov chain over a high-dimensional sample space by iteratively sampling each variable conditioned on the currently drawn sample values for the others, starting from a random initialization. The Markov chain converges to an equilibrium distribution which is in fact the desired joint density (Geman and Geman, 1984). We now sketch the sampling equations for each of our sampled variables. 
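Before the individual sampling steps, note that the Monte Carlo decision rule in Equation 2 amounts to a majority vote over sampled tag assignments. A toy sketch, assuming samples are stored as character-to-tag dictionaries (a representation chosen here for illustration only):

```python
from collections import Counter

# Predict, for each character type, the tag it received most often across
# the collected Gibbs samples (Equation 2).

def predict_tags(samples):
    counts = {}
    for mapping in samples:
        for char, tag in mapping.items():
            counts.setdefault(char, Counter())[tag] += 1
    return {char: c.most_common(1)[0][0] for char, c in counts.items()}

# Three toy samples over a two-character alphabet.
samples = [{"p": "C", "e": "V"}, {"p": "C", "e": "V"}, {"p": "V", "e": "V"}]
print(predict_tags(samples))   # {'p': 'C', 'e': 'V'}
```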
Sampling tw,ℓ To sample the tag assignment to character w in language ℓ, we need to compute: f (tw,ℓ| w, t−w,ℓ, t−ℓ, z, α, β) (3) ∝f (wℓ, tℓ, Nℓ| αk, βk, Nk−ℓ) (4) where Nℓare the types-per-tag counts implied by the mapping tℓ, k is the current cluster assignment for the target language (zℓ= k), αk and βk are the cluster parameters, and Nk−ℓare the types-per-tag counts for all languages currently assigned to the cluster, other than language ℓ. Applying the chain rule along with our model’s conditional independence structure, we can further re-write Equation 4 as a product of three terms: f(Nℓ|Nk−ℓ) (5) f(t1, t2, . . . |αk) (6) f(w1, w2, . . . |Nℓ, t1, t2, . . . , βk) (7) The first term is the posterior predictive distribution for the Poisson-Gamma compound distribution and is easy to derive. The second term is the tag transition predictive distribution given Dirichlet hyperparameters, yielding a familiar Polya urn scheme form. Removing terms that don’t depend on the tag assignment tℓ,w gives us: Q t,t′ αk,t,t′ + n(t, t′) [n′(t,t′)] Q t P t′ αk,t,t′ + n(t) [n′(t)] where n(t) and n(t, t′) are, respectively, unigram and bigram tag counts excluding those containing character w. Conversely, n′(t) and n′(t, t′) are, respectively, unigram and bigram tag counts only including those containing character w. The notation a[n] denotes the ascending factorial: a(a + 1) · · · (a+n−1). Finally, we tackle the third term, Equation 7, corresponding to the predictive distribution of emission observations given Dirichlet hyperparameters. Again, removing constant terms gives us: β[n(w)] k,t Q t′ Nℓ,t′β[n(t′)] k,t′ where n(w) is the unigram count of character w, and n(t′) is the unigram count of tag t, over all characters tokens (including w). Sampling αk,t,t′ To sample the Dirichlet hyperparameter for cluster k and transition t →t′, we need to compute: f(αk,t,t′|t, z) ∝f(t, z|αz,t,t′) = f(tk|αz,t,t′) where tk are the tag sequences for all languages currently assigned to cluster k. This term is a predictive distribution of the multinomial-Dirichlet compound when the observations are grouped into multiple multinomials all with the same prior. Rather than inefficiently computing a product of Polya urn schemes (with many repeated ascending 1531 factorials with the same base), we group common terms together and calculate: Q j=1(αk,t,t′ + k)n(j,k,t,t′) Q j=1(P t′′ αk,t,t′′ + k)n(j,k,t) where n(j, k, t) and n(j, k, t, t′) are the numbers of languages currently assigned to cluster k which have more than j occurrences of unigram (t) and bigram (t, t′), respectively. This gives us an efficient way to compute unnormalized posterior densities for α. However, we need to sample from these distributions, not just compute them. To do so, we turn to slice sampling (Neal, 2003), a simple yet effective auxiliary variable scheme for sampling values from unnormalized but otherwise computable densities. The key idea is to supplement the variable x, distributed according to unnormalized density ˜p(x), with a second variable u with joint density defined as p(x, u) ∝I(u < ˜p(x)). It is easy to see that ˜p(x) ∝ ´ p(x, u)du. We then iteratively sample u|x and x|u, both of which are distributed uniformly across appropriately bounded intervals. Our implementation follows the pseudocode given in Mackay (2003). 
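A minimal sketch of one such slice-sampling update is given below. It follows the standard stepping-out and shrinkage recipe rather than the authors' code; in practice one would work with log densities and let the unnormalized density return zero outside the hyperparameter's support.

```python
import random

# One slice-sampling update (Neal, 2003) for a scalar variable x with
# unnormalized density p_tilde. The step size w and the step limits are
# assumptions of this sketch.

def slice_sample_step(x, p_tilde, w=1.0, max_steps=100):
    u = random.uniform(0.0, p_tilde(x))        # auxiliary variable u | x
    # Step out: grow an interval [left, right] that contains the slice.
    left = x - random.uniform(0.0, w)
    right = left + w
    steps = 0
    while p_tilde(left) > u and steps < max_steps:
        left -= w
        steps += 1
    steps = 0
    while p_tilde(right) > u and steps < max_steps:
        right += w
        steps += 1
    # Shrink: sample uniformly until a point inside the slice is found.
    while True:
        x_new = random.uniform(left, right)
        if p_tilde(x_new) > u:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new
```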
Sampling βk,t To sample the Dirichlet hyperparameter for cluster k and tag t we need to compute: f(βk,t|t, w, z, N) ∝f(w|t, z, βk,t, N) ∝f(wk|tk, βk,t, Nk) where, as before, tk are the tag sequences for languages assigned to cluster k, Nk are the tag observation type-counts for languages assigned to the cluster, and likewise wk are the character sequences of all languages in the cluster. Again, we have the predictive distribution of the multinomial-Dirichlet compound with multiple grouped observations. We can apply the same trick as above to group terms in the ascending factorials for efficient computation. As before, we use slice sampling for obtaining samples. Sampling zℓ Finally, we consider sampling the cluster assignment zℓfor each language ℓ. We calculate: f(zℓ= k|w, t, N, z−ℓ, α, β) ∝f(wℓ, tℓ, Nℓ|αk, βk, Nk−ℓ) = f(Nℓ|Nk−ℓ)f(tℓ|αk)f(wℓ|tℓ, Nℓ, βk) The three terms correspond to (1) a standard predictive distributions for the Poisson-gamma compound and (2) the standard predictive distributions for the transition and emission multinomialDirichlet compounds. 5 Experiments To test our model, we apply it to a corpus of 503 languages for two decipherment tasks. In both cases, we will assume no knowledge of our target language or its writing system, other than that it is alphabetic in nature. At the same time, we will assume basic phonetic knowledge of the writing systems of the other 502 languages. For our first task, we will predict whether each character type is a consonant or a vowel. In the second task, we further subdivide the consonants into two major categories: the nasal consonants, and the nonnasal consonants. Nasal consonants are known to be perceptually very salient and are unique in being high frequency consonants in all known languages. 5.1 Data Our data is drawn from online electronic translations of the Bible (http://www.bible.is, http://www.crosswire.org/index. jsp, and http://www.biblegateway. com). We have identified translations covering 503 distinct languages employing alphabetic writing systems. Most of these languages (476) use variants of the Latin alphabet, a few (26) use Cyrillic, and one uses the Greek alphabet. As Table 1 indicates, the languages cover a very diverse set of families and geographic regions, with Niger-Congo languages being the largest represented family.4 Of these languages, 30 are either language isolates, or sole members of their language family in our data set. For our experiments, we extracted unique word types occurring at least 5 times from the downloaded Bible texts. We manually identified vowel, nasal, and non-nasal character types. Since the letter “y” can frequently represent both a consonant and vowel, we exclude it from our evaluation. On average, the resulting vocabularies contain 2,388 unique words, with 19 consonant characters, two 2 nasal characters, and 9 vowels. We include the data as part of the paper. 4In fact, the Niger-Congo grouping is often considered the largest language family in the world in terms of distinct member languages. 1532 Language Family #lang Niger-Congo 114 Austronesian 67 Oto-Manguean 41 Indo-European 39 Mayan 34 Quechuan 17 Afro-Asiatic 17 Uto-Aztecan 16 Altaic 16 Trans-New Guinea 15 Nilo-Saharan 14 Sino-Tibetan 13 Tucanoan 9 Creole 8 Chibchan 6 Maipurean 5 Tupian 5 Nakh-Daghestanian 4 Uralic 4 Cariban 4 Totonacan 4 Mixe-Zoque 3 Jivaroan 3 Choco 3 Guajiboan 2 Huavean 2 Austro-Asiatic 2 Witotoan 2 Jean 2 Paezan 2 Other 30 Table 1: Language families in our data set. 
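The vocabulary extraction step described above (unique word types occurring at least five times) is straightforward; a sketch follows, with whitespace tokenization as an assumption of the illustration.

```python
from collections import Counter

# Keep the unique word types that occur at least min_count times in a text.

def extract_vocabulary(text, min_count=5):
    counts = Counter(text.split())
    return {w for w, c in counts.items() if c >= min_count}
```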
The Other category includes 9 language isolates and 21 language family singletons. 5.2 Baselines and Model Variants As our baseline, we consider the trigram HMM model of Knight et al. (2006), trained with EM. In all experiments, we run 10 random restarts of EM, and pick the prediction with highest likelihood. We map the induced tags to the gold-standard tag categories (1-1 mapping) in the way that maximizes accuracy. We then consider three variants of our model. The simplest version, SYMM, disregards all information from other languages, using simple symmetric hyperparameters on the transition and emission Dirichlet priors (all hyperparameters set to 1). This allows us to assess the performance of Model Cons vs Vowel C vs V vs N All EM 93.37 74.59 SYMM 95.99 80.72 MERGE 97.14 86.13 CLUST 98.85 89.37 Isolates EM 94.50 74.53 SYMM 96.18 78.13 MERGE 97.66 86.47 CLUST 98.55 89.07 Non-Latin EM 92.93 78.26 SYMM 95.90 79.04 MERGE 96.06 83.78 CLUST 97.03 85.79 Table 2: Average accuracy for EM baseline and model variants across 503 languages. First panel: results on all languages. Second panel: results for 30 isolate and singleton languages. Third panel: results for 27 non-Latin alphabet languages (Cyrillic and Greek). Standard Deviations across languages are about 2%. our Gibbs sampling inference method for the typebased HMM, even in the absence of multilingual priors. We next consider a variant of our model, MERGE, that assumes that all languages reside in a single cluster. This allows knowledge from the other languages to affect our tag posteriors in a generic, language-neutral way. Finally, we consider the full version of our model, CLUST, with 20 language clusters. By allowing for the division of languages into smaller groupings, we hope to learn more specific parameters tailored for typologically coherent clusters of languages. 6 Results The results of our experiments are shown in Table 2. In all cases, we report token-level accuracy (i.e. frequent characters count more than infrequent characters), and results are macro-averaged over the 503 languages. Variance across languages is quite low: the standard deviations are about 2 percentage points. For the consonant vs. vowel prediction task, all tested models perform well. Our baseline, the EM-based HMM, achieves 93.4% accuracy. Simply using our Gibbs sampler with symmetric priors boosts the performance up to 96%. Performance 1533 1534 Figure 4: Inferred Dirichlet transition hyperparameters for bigram CLUST on three-way classification task with four latent clusters. Row gives starting state, column gives target state. Size of red blobs are proportional to magnitude of corresponding hyperparameters. Language Family Portion #langs Ent. Indo-European 0.38 26 2.26 0.24 41 3.19 0.21 38 3.77 Quechuan 0.89 18 0.61 Mayan 0.64 33 1.70 Oto-Manguean 0.55 31 1.99 Maipurean 0.25 8 2.75 Tucanoan 0.2 45 3.98 Uto-Aztecan 0.4 25 2.85 Altaic 0.44 27 2.76 Niger-Congo 1 2 0.00 0.78 23 1.26 0.74 27 1.05 0.68 22 1.22 0.67 33 1.62 0.5 18 2.21 0.24 25 3.27 Austronesian 0.91 22 0.53 0.71 21 1.51 0.24 17 3.06 Table 3: Plurality language families across 20 clusters. The columns indicate portion of languages in the plurality family, number of languages, and entropy over families. with a bigram HMM with four language clusters. Examining just the first row, we see that the languages are partially grouped by their preference for the initial tag of words. All clusters favor languages which prefer initial consonants, though this preference is most weakly expressed in cluster 3. 
In contrast, both clusters 2 and 4 have very dominant tendencies towards consonant-initial languages, but differ in the relative weight given to languages preferring either vowels or nasals initially. Finally, we examine the relationship between the induced clusters and language families in Table 3, for the trigram consonant vs. vowel CLUST model with 20 clusters. We see that for about half the clusters, there is a majority language family, most often Niger-Congo. We also observe distinctive clusters devoted to Austronesian and Quechuan languages. The largest two clusters are rather indistinct, without any single language family achieving more than 24% of the total. 8 Conclusion In this paper, we presented a successful solution to one aspect of the decipherment task: the prediction of consonants and vowels for an unknown language and alphabet. Adopting a classical Bayesian perspective, we develop a model that performs posterior inference over hundreds of languages, leveraging knowledge of known languages to uncover general linguistic patterns of typologically coherent language clusters. Using this model, we automatically distinguish between consonant and vowel characters with nearly 99% accuracy across 503 languages. We further experimented on a three-way classification task involving nasal characters, achieving nearly 90% accuracy. Future work will take us in several new directions: first, we would like to move beyond the assumption of an alphabetic writing system so that we can apply our method to undeciphered syllabic scripts such as Linear A. We would also like to extend our methods to achieve finer-grained resolution of phonetic properties beyond nasals, consonants, and vowels. Acknowledgments The authors thank the reviewers and acknowledge support by the NSF (grant IIS-1116676) and a research gift from Google. Any opinions, findings, or conclusions are those of the authors, and do not necessarily reflect the views of the NSF. 1535 References Taylor Berg-Kirkpatrick and Dan Klein. 2010. Phylogenetic grammar induction. In Proceedings of the ACL, pages 1288–1297. Association for Computational Linguistics. Alexandre Bouchard-Côté, David Hall, Thomas L Griffiths, and Dan Klein. 2013. Automated reconstruction of ancient languages using probabilistic models of sound change. Proceedings of the National Academy of Sciences, 110(11):4224–4229. Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2011. A Bayesian mixture model for partof-speech induction using multiple features. In Proceedings of EMNLP, pages 638–647. Association for Computational Linguistics. Shay B Cohen and Noah A Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proceedings of NAACL, pages 74– 82. Association for Computational Linguistics. Shay B Cohen, Dipanjan Das, and Noah A Smith. 2011. Unsupervised structure prediction with non-parallel multilingual guidance. In Proceedings of EMNLP, pages 50–61. Association for Computational Linguistics. Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. Pattern Analysis and Machine Intelligence, IEEE Transactions on, (6):721–741. Young-Bum Kim and Benjamin Snyder. 2012. Universal grapheme-to-phoneme prediction over latin alphabets. In Proceedings of EMNLP, pages 332–343, Jeju Island, South Korea, July. Association for Computational Linguistics. Young-Bum Kim, João V Graça, and Benjamin Snyder. 2011. 
Universal morphological analysis using structured nearest neighbor prediction. In Proceedings of EMNLP, pages 322–332. Association for Computational Linguistics. Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of COLING/ACL, pages 499–506. Association for Computational Linguistics. Yoong Keok Lee, Aria Haghighi, and Regina Barzilay. 2010. Simple type-level unsupervised POS tagging. In Proceedings of EMNLP, pages 853–861. Association for Computational Linguistics. Percy Liang, Michael I Jordan, and Dan Klein. 2010. Typebased MCMC. In Proceedings of NAACL, pages 573–581. Association for Computational Linguistics. David JC MacKay. 2003. Information Theory, Inference and Learning Algorithms. Cambridge University Press. Tahira Naseem, Benjamin Snyder, Jacob Eisenstein, and Regina Barzilay. 2009. Multilingual part-of-speech tagging: Two unsupervised approaches. Journal of Artificial Intelligence Research, 36(1):341–385. Radford M Neal. 2003. Slice sampling. Annals of statistics, 31:705–741. Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the ACL, pages 1048–1057. Association for Computational Linguistics. 1536
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1537–1546, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Improving Text Simplification Language Modeling Using Unsimplified Text Data David Kauchak Middlebury College Middlebury, VT 05753 [email protected] Abstract In this paper we examine language modeling for text simplification. Unlike some text-to-text translation tasks, text simplification is a monolingual translation task allowing for text in both the input and output domain to be used for training the language model. We explore the relationship between normal English and simplified English and compare language models trained on varying amounts of text from each. We evaluate the models intrinsically with perplexity and extrinsically on the lexical simplification task from SemEval 2012. We find that a combined model using both simplified and normal English data achieves a 23% improvement in perplexity and a 24% improvement on the lexical simplification task over a model trained only on simple data. Post-hoc analysis shows that the additional unsimplified data provides better coverage for unseen and rare n-grams. 1 Introduction An important component of many text-to-text translation systems is the language model which predicts the likelihood of a text sequence being produced in the output language. In some problem domains, such as machine translation, the translation is between two distinct languages and the language model can only be trained on data in the output language. However, some problem domains (e.g. text compression, text simplification and summarization) can be viewed as monolingual translation tasks, translating between text variations within a single language. In these monolingual problems, text could be used from both the input and output domain to train a language model. In this paper, we investigate this possibility for text simplification where both simplified English text and normal English text are available for training a simple English language model. Table 1 shows the n-gram overlap proportions in a sentence aligned data set of 137K sentence pairs from aligning Simple English Wikipedia and English Wikipedia articles (Coster and Kauchak, 2011a).1 The data highlights two conflicting views: does the benefit of additional data outweigh the problem of the source of the data? Throughout the rest of this paper we refer to sentences/articles/text from English Wikipedia as normal and sentences/articles/text from Simple English Wikipedia as simple. On the one hand, there is a strong correspondence between the simple and normal data. At the word level 96% of the simple words are found in the normal corpus and even for n-grams as large as 5, more than half of the n-grams can be found in the normal text. In addition, the normal text does represent English text and contains many n-grams not seen in the simple corpus. This extra information may help with data sparsity, providing better estimates for rare and unseen n-grams. On the other hand, there is still only modest overlap between the sentences for longer n-grams, particularly given that the corpus is sentencealigned and that 27% of the sentence pairs in this aligned data set are identical. If the word distributions were very similar between simple and normal text, then the overlap proportions between the two languages would be similar regardless of which direction the comparison is made. 
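Proportions of this kind are straightforward to recompute. The following is a minimal sketch rather than the authors' script; it assumes the two corpora are available as lists of token lists and counts overlap over n-gram types (the table does not state whether types or tokens are intended), computing the proportion in each direction so the asymmetry between the two comparisons can be checked.

def ngram_types(sentences, n):
    # Collect the set of n-gram types in a tokenized corpus.
    grams = set()
    for sent in sentences:
        for i in range(len(sent) - n + 1):
            grams.add(tuple(sent[i:i + n]))
    return grams

def directional_overlap(source_sents, target_sents, n):
    # Proportion of n-gram types of source_sents also found in target_sents,
    # i.e. the "simple in normal" and "normal in simple" rows of Table 1.
    source = ngram_types(source_sents, n)
    target = ngram_types(target_sents, n)
    return len(source & target) / len(source) if source else 0.0

# Illustrative usage, with simple_sents and normal_sents as placeholder names:
# for n in range(1, 6):
#     print(n,
#           directional_overlap(simple_sents, normal_sents, n),
#           directional_overlap(normal_sents, simple_sents, n))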
Instead, we see that the normal text has more varied language and contains more n-grams. Previous research has also shown other differences between simple and normal data sources that could impact language model performance including average number of syllables, reading 1http://www.cs.middlebury.edu/˜dkauchak/simplification 1537 n-gram size: 1 2 3 4 5 simple in normal 0.96 0.80 0.68 0.61 0.55 normal in simple 0.87 0.68 0.58 0.51 0.46 Table 1: The proportion of n-grams that overlap in a corpus of 137K sentence-aligned pairs from Simple English Wikipedia and English Wikipedia. complexity, and grammatical complexity (Napoles and Dredze, 2010; Zhu et al., 2010; Coster and Kauchak, 2011b). In addition, for some monolingual translation domains, it has been argued that it is not appropriate to train a language model using data from the input domain (Turner and Charniak, 2005). Although this question arises in other monolingual translation domains, text simplification represents an ideal problem area for analysis. First, simplified text data is available in reasonable quantities. Simple English Wikipedia contains more than 60K articles written in simplified English. This is not the case for all monolingual translation tasks (Knight and Marcu, 2002; Cohn and Lapata, 2009). Second, the quantity of simple text data available is still limited. After preprocessing, the 60K articles represents less than half a million sentences which is orders of magnitude smaller than the amount of normal English data available (for example the English Gigaword corpus (David Graff, 2003)). Finally, many recent text simplification systems have utilized language models trained only on simplified data (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011a; Wubben et al., 2012); improvements in simple language modeling could translate into improvements for these systems. 2 Related Work If we view the normal data as out-of-domain data, then the problem of combining simple and normal data is similar to the language model domain adaption problem (Suzuki and Gao, 2005), in particular cross-domain adaptation (Bellegarda, 2004) where a domain-specific model is improved by incorporating additional general data. Adaptation techniques have been shown to improve language modeling performance based on perplexity (Rosenfeld, 1996) and in application areas such as speech transcription (Bacchiani and Roark, 2003) and machine translation (Zhao et al., 2004), though no previous research has examined the language model domain adaptation problem for text simplification. Pan and Yang (2010) provide a survey on the related problem of domain adaptation for machine learning (also referred to as “transfer learning”), which utilizes similar techniques. In this paper, we explore some basic adaptation techniques, however this paper is not a comparison of domain adaptation techniques for language modeling. Our goal is more general: to examine the relationship between simple and normal data and determine whether normal data is helpful. Previous domain adaptation research is complementary to our experiments and could be explored in the future for additional performance improvements. Simple language models play a role in a variety of text simplification applications. 
Many recent statistical simplification techniques build upon models from machine translation and utilize a simple language model during simplification/decoding both in English (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011a; Wubben et al., 2012) and in other languages (Specia, 2010). Simple English language models have also been used as predictive features in other simplification sub-problems such as lexical simplification (Specia et al., 2012) and predicting text simplicity (Eickhoff et al., 2010). Due to data scarcity, little research has been done on language modeling in other monolingual translation domains. For text compression, most systems are trained on uncompressed data since the largest text compression data sets contain only a few thousand sentences (Knight and Marcu, 2002; Galley and McKeown, 2007; Cohn and Lapata, 2009; Nomoto, 2009). Similarly for summarization, systems that have employed language models trained only on unsummarized text (Banko et al., 2000; Daume and Marcu, 2002). 3 Corpus We collected a data set from English Wikipedia and Simple English Wikipedia with the former representing normal English and the latter simple English. Simple English Wikipedia has been previously used for many text simplification approaches (Zhu et al., 2010; Yatskar et al., 2010; Biran et al., 2011; Coster and Kauchak, 2011a; Woodsend and Lapata, 2011; Wubben et al., 2012) and has been shown to be simpler than normal English Wikipedia by both automatic measures and human perception (Coster and Kauchak, 2011b; 1538 simple normal sentences 385K 2540K words 7.15M 64.7M vocab size 78K 307K Table 2: Summary counts for the simple-normal article aligned data set consisting of 60K article pairs. Woodsend and Lapata, 2011). We downloaded all articles from Simple English Wikipedia then removed stubs, navigation pages and any article that consisted of a single sentence, resulting in 60K simple articles. To partially normalize for content and source differences we generated a document aligned corpus for our experiments. We extracted the corresponding 60K normal articles from English Wikipedia based on the article title to represent the normal data. We held out 2K article pairs for use as a testing set in our experiments. The extracted data set is available for download online.2 Table 2 shows count statistics for the collected data set. Although the simple and normal data contain the same number of articles, because normal articles tend to be longer and contain more content, the normal side is an order of magnitude larger. 4 Language Model Evaluation: Perplexity To analyze the impact of data source on simple English language modeling, we trained language models on varying amounts of simple data, normal data, and a combination of the two. For our first task, we evaluated these language models using perplexity based on how well they modeled the simple side of the held-out data. 4.1 Experimental Setup We used trigram language models with interpolated Kneser-Kney discounting trained using the SRI language modeling toolkit (Stolcke, 2002). To ensure comparability, all models were closed vocabulary with the same vocabulary set based on the words that occurred in the simple side of the training corpus, though similar results were seen for other vocabulary choices. 
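As a rough Python counterpart to this SRILM setup (a sketch only; the paper's models were built with SRILM, and the names train_sents and test_sents are placeholders), NLTK's language-modeling API supports interpolated Kneser-Ney trigram models and held-out perplexity:

from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import ngrams

def train_trigram_lm(train_sents, order=3):
    # train_sents: list of token lists, e.g. sentences from the simple side.
    train_grams, vocab = padded_everygram_pipeline(order, train_sents)
    lm = KneserNeyInterpolated(order)
    lm.fit(train_grams, vocab)
    return lm

def heldout_perplexity(lm, test_sents, order=3):
    # Perplexity over the held-out sentences; lower is better. Unlike the
    # closed-vocabulary setup in the paper, unseen words map to <UNK> here,
    # which can yield infinite perplexity if <UNK> received no training mass.
    test_grams = [g for sent in test_sents
                  for g in ngrams(pad_both_ends(sent, n=order), order)]
    return lm.perplexity(test_grams)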
We generated different models by varying the size and type of training 2http://www.cs.middlebury.edu/˜dkauchak/simplification 100 150 200 250 300 350 0.5M 1M 1.5M 2M 2.5M 3M perplexity total number of sentences simple-only normal-only simple-ALL+normal Figure 1: Language model perplexities on the held-out test data for models trained on increasing amounts of data. data: - simple-only: simple sentences only - normal-only: normal sentences only - simple-X+normal: X simple sentences combined with a varying number of normal sentences To evaluate the language models we calculated the model perplexity (Chen et al., 1998) on the simple side of the held-out data. The test set consisted of 2K simple English articles with 7,799 simple sentences and 179K words. Perplexity measures how likely a model finds a test set, with lower scores indicating better performance. 4.2 Perplexity Results Figure 1 shows the language model perplexities for the three types of models for increasing amounts of training data. As expected, when trained on the same amount of data, the language models trained on simple data perform significantly better than language models trained on normal data. In addition, as we increase the amount of data, the simple-only model improves more than the normal-only model. However, the results also show that the normal data does have some benefit. The perplexity for the simple-ALL+normal model, which starts with all available simple data, continues to improve as normal data is added resulting in a 23% improvement over the model trained with only simple data (from a perplexity of 129 down to 100). Even by itself the normal data does have value. The normal-only model achieves a slightly better perplexity than the simple-only model, though only by utilizing an order of magnitude more data. 1539 100 150 200 250 300 0.5M 1M 1.5M 2M 2.5M perplexity number of additional normal sentences simple-50k+normal simple-100k+normal simple-150k+normal simple-200k+normal simple-250k+normal simple-300k+normal simple-350k+normal Figure 2: Language model perplexities for combined simple-normal models. Each line represents a model trained on a different amount of simple data as normal data is added. To better understand how the amount of simple and normal data impacts perplexity, Figure 2 shows perplexity scores for models trained on varying amounts of simple data as we add increasing amounts of normal data. We again see that normal data is beneficial; regardless of the amount of simple data, adding normal data improves perplexity. This improvement is most beneficial when simple data is limited. Models trained on less simple data achieved larger performance increases than those models trained on more simple data. Figure 2 also shows again that simple data is more valuable than normal data. For example, the simple-only model trained on 250K sentences achieves a perplexity of approximately 150. To achieve this same perplexity level starting with 200K simple sentences requires an additional 300K normal sentences, or starting with 100K simple sentences an additional 850K normal sentences. 4.3 Language Model Adaptation In the experiments above, we generated the language models by treating the simple and normal data as one combined corpus. This approach has the benefit of simplicity, however, better performance for combining related corpora has been seen by domain adaptation techniques which combine the data in more structured ways (Bacchiani and Roark, 2003). 
Our goal for this paper is not to explore domain adaptation techniques, but to determine if normal data is useful for the simple language modeling task. However, to provide another dimension for comparison and to understand if domain adaptation techniques may be useful, we also investigated a linearly interpolated language model. A linearly interpolated language model combines the probabilities of two or more language models as a weighted sum. In our case, the interpolated model combines the simple model estimate, ps(wi|wi−2, wi−1), and the normal model estimate, pn(wi|wi−2, wi−1), linearly (Jelinek and Mercer, 1980; Hsu, 2007): pinterpolated(wi|wi−2, wi−1) = λ · pn(wi|wi−2, wi−1) + (1 − λ) · ps(wi|wi−2, wi−1), where 0 ≤ λ ≤ 1.
Figure 3: Perplexity scores for a linearly interpolated model between the simple-only model and the normal-only model for varying lambda values.
Figure 3 shows perplexity scores for varying lambda values ranging from the simple-only model on the left with λ = 0 to the normal-only model on the right with λ = 1. As with the previous experiments, adding normal data improves perplexity. In fact, with a lambda of 0.5 (equal weight between the models) the performance is slightly better than the aggregate approaches above, with a perplexity of 98. The results also highlight the balance between simple and normal data; normal data is not as good as simple data, and adding too much of it can cause the results to degrade.
5 Language Model Evaluation: Lexical Simplification
Currently, no automated methods exist for evaluating sentence-level or document-level text simplification systems, and manual evaluation is time-consuming, expensive and has not been validated. Because of these evaluation challenges, we chose to evaluate the language models extrinsically based on the lexical simplification task from SemEval 2012 (Specia et al., 2012). Lexical simplification is a sub-problem of the general text simplification problem (Chandrasekar and Srinivas, 1997); a sentence is simplified by substituting words or phrases in the sentence with "simpler" variations. Lexical simplification approaches have been shown to improve the readability of texts (Urano, 2000; Leroy et al., 2012), are useful in domains such as medical texts where major content changes are restricted, and they may be useful as a pre- or post-processing step for general simplification systems.
5.1 Experimental Setup
Examples from the lexical simplification data set from SemEval 2012 consist of three parts: w, the word to be simplified; s1, ..., si−1, w, si+1, ..., sn, a sentence containing the word; and r1, r2, ..., rm, a list of candidate simplifications for w. The goal of the task is to rank the candidate simplifications according to their simplicity in the context of the sentence. Figure 4 shows an example from the data set.
Figure 4: A lexical substitution example from the SemEval 2012 data set. Word: tight. Context: With the physical market as tight as it has been in memory, silver could fly at any time. Candidates: constricted, pressurised, low, high-strung, tight. Human ranking: tight, low, constricted, pressurised, high-strung.
The data set contains a development set of 300 examples and a test set of 1710 examples.3 For our experiments, we evaluated the models on the test set.
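Before turning to how the models are applied to this task, the interpolation of Section 4.3 can be sketched directly. In the sketch below, p_simple and p_normal stand for any two trained models exposing a conditional probability of a word given its history; the callable interface is an assumption for illustration, not a fixed API. The same interpolated score extends to whole candidate sentences, which is how the models are used for ranking in what follows.

import math

def interpolated_prob(p_simple, p_normal, word, context, lam):
    # p_interpolated(w | h) = lam * p_normal(w | h) + (1 - lam) * p_simple(w | h),
    # with 0 <= lam <= 1; lam = 0 is the simple-only model, lam = 1 the normal-only model.
    return lam * p_normal(word, context) + (1.0 - lam) * p_simple(word, context)

def sentence_logprob(p_simple, p_normal, tokens, lam, order=3):
    # Log-probability of a tokenized sentence under the interpolated trigram model.
    # Start-of-sentence padding is omitted for brevity, and smoothed (nonzero)
    # component probabilities are assumed.
    total = 0.0
    for i, word in enumerate(tokens):
        context = tuple(tokens[max(0, i - order + 1):i])
        total += math.log(interpolated_prob(p_simple, p_normal, word, context, lam))
    return total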
Given a language model p(·) and a lexical simplification example, we ranked the list of candidates based on the probability the language model assigns to the sentence with the candidate simplification inserted in context. Specifically, we scored each candidate simplification rj by p(s1... si−1 rj si+1... sn) and then ranked them based on this score. For example, to calculate the ranking for the example in Figure 4 we calculate the probability of each of: With the physical market as constricted as it has been ... With the physical market as pressurised as it has been ... With the physical market as low as it has been ... With the physical market as high-strung as it has been ... With the physical market as tight as it has been ... with the language model and then rank them by their probability. We do not suggest this as a com3http://www.cs.york.ac.uk/semeval-2012/task1/ 0.24 0.26 0.28 0.3 0.32 0.34 0.36 0.5M 1M 1.5M 2M 2.5M 3M kappa rank score total number of sentences simple-only normal-only simple-ALL+normal Figure 5: Kappa rank scores for the models trained on increasing amounts of data. plete lexical substitution system, but it was a common feature for many of the submitted systems, it performs well relative to the other systems, and it allows for a concrete comparison between the language models on a simplification task. To evaluate the rankings, we use the metric from the SemEval 2012 task, the Cohen’s kappa coefficient (Landis and Koch, 1977) between the system ranking and the human ranking, which we denote the “kappa rank score”. See Specia et al. (2012) for the full details of how the evaluation metric is calculated. We use the same setup for training the language models as in the perplexity experiments except the models are open vocabulary instead of closed. Open vocabulary models allow for the language models to better utilize the varying amounts of data and since the lexical simplification problem only requires a comparison of probabilities within a given model to produce the final ranking, we do not need the closed vocabulary requirement. 5.2 Lexical Simplification Results Figure 5 shows the kappa rank scores for the simple-only, normal-only and combined models. As with the perplexity results, for similar amounts of data the simple-only model performs better than the normal-only model. We also again see that the performance difference between the two models grows as the amount of data increases. However, 1541 0.24 0.26 0.28 0.3 0.32 0.34 0.36 0.5M 1M 1.5M 2M 2.5M kappa rank score number of additional normal sentences normal-only simple-100k+normal simple-150k+normal simple-200k+normal simple-250k+normal simple-300k+normal simple-350k+normal simple-ALL+normal Figure 6: Kappa rank scores for models trained with varying amounts of simple data combined with increasing amounts of normal data. unlike the perplexity results, simply appending additional normal data to the entire simple data set does not improve the performance of the lexical simplifier. To determine if additional normal data improves the performance for models trained on smaller amounts of simple data, Figure 6 shows the kappa rank scores for models trained on different amounts of simple data as additional normal data is added. For smaller amounts of simple data adding normal data does improve the kappa rank score. For example, a language model trained with 100K simple sentences achieves a score of 0.246 and is improved by almost 40% to 0.344 by adding all of the additional normal data. 
Even the performance of a model trained with 300K simple sentences is increased by 3% (0.01 improvement in kappa rank score) by adding normal data. 5.3 Language Model Adaptation The results in the previous section show that adding normal data to a simple data set can improve the lexical simplifier if the amount of simple data is limited. To investigate this benefit further, we again generated linearly interpolated language models between the simple-only model and the normal-only model. Figure 7 shows results for the same experimental design as Figure 6 with varying amounts of simple and normal data, however, rather than appending the normal data we trained the models separately and created a linearly interpolated model as described in Section 4.3. The best lambda was chosen based on a linear search optimized on the SemEval 2012 development set. For all starting amounts of simple data, interpo 0.24 0.27 0.3 0.33 0.36 0.39 0.42 0.5M 1M 1.5M 2M 2.5M kappa rank score number of additional normal sentences simple-100k simple-150k simple-200k simple-250k simple-300k simple-350k simple-ALL Figure 7: Kappa rank scores for linearly interpolated models between simple-only and normalonly models trained with varying amounts of simple and normal data. lating the simple model with the normal model results in a large increase in the kappa rank score. Combining the model trained on all the simple data with the model trained on all the normal data achieves a score of 0.419, an improvement of 23% over the model trained on only simple data. Although our goal was not to create the best lexical simplification system, this approach would have ranked 6th out of 11 submitted systems in the SemEval 2012 competition (Specia et al., 2012). Interestingly, although the performance of the simple-only models varied based on the amount of simple data, when these models are interpolated with a model trained on normal data, the performance tended to converge. This behavior is also seen in Figure 6, though to a lesser extent. This may indicate that for some tasks like lexical simplification, only a modest amount of simple data is required when combining with additional normal data to achieve reasonable performance. 6 Why Does Unsimplified Data Help? For both the perplexity experiments and the lexical simplification experiments, utilizing additional normal data resulted in large performance improvements; using all of the simple data available, performance is still significantly improved when combined with normal data. In this section, we investigate why the additional normal data is beneficial for simple language modeling. 6.1 More n-grams Intuitively, adding normal data provides additional English data to train on. Most language models are 1542 Perplexity test data Lexical simplification simple normal % inc. simple normal % inc. 1-grams 0.85 0.93 9.4% 0.74 0.78 6.2% 2-grams 0.66 0.82 24% 0.34 0.54 56% 3-grams 0.39 0.57 46% 0.088 0.19 117% Table 3: Proportion of n-grams in the test sets that occur in the simple and normal training data sets. trained using a smoothed version of the maximum likelihood estimate for an n-gram. For trigrams, this is: p(a|bc) = count(abc) count(bc) where count(·) is the number of times the n-gram occurs in the training corpus. For interpolated and backoff n-gram models, these counts are smoothed based on the probabilities of lower order n-gram models, which are in-turn calculated based on counts from the corpus. 
We hypothesize that the key benefit of additional normal data is access to more n-gram counts and therefore better probability estimation, particularly for n-grams in the simple corpus that are unseen or have low frequency. For n-grams that have never been seen before, the normal data provides some estimate from English text. This is particularly important for unigrams (i.e. words) since there is no lower order model to gain information from and most language models assume a uniform prior on unseen words, treating them all equally. For n-grams that have been seen but are rare, the additional normal data can help provide better probability estimates. Because frequencies tend to follow a Zipfian distribution, these rare n-grams make up a large portion of n-grams in real data (Ha et al., 2003). To partially validate this hypothesis, we examined the n-gram overlap between the n-grams in the training data and the n-grams in the test sets from the two tasks. Table 3 shows the percentage of unigrams, bigrams and trigrams from the two test sets that are found in the simple and normal training data. For all n-gram sizes the normal data contained more test set n-grams than the simple data. Even at the unigram level, the normal data contained significantly more of the test set unigrams than the simple data. On the perplexity data set, the 9.4% increase in word occurrence between the simple and normal data set represents an over 50% reduction in the number of out of vocabulary words. For Perplexity test data Lexical simplification simple + % inc. over simple + % inc. over normal normal normal normal 1-grams 0.93 0.2% 0.78 0.0% 2-grams 0.83 0.8% 0.54 1.1% 3-grams 0.58 2.5% 0.20 2.6% Table 4: Proportion of n-grams in the test sets that occur in the combination of both the simple and normal data. larger n-grams, the difference between the simple and normal data sets are even more pronounced. On the lexical simplification data the normal data contained more than twice as many test trigrams as the simple data. These additional n-grams allow for better probability estimates on the test data and therefore better performance on the two tasks. 6.2 The Role of Normal Data Estimation of rare events is one component of language model performance, but other factors also impact performance. Table 4 shows the test set n-gram overlap on the combined data set of simple and normal data. Because the simple and normal data come from the same content areas, the simple data provides little additional coverage if the normal data is already used. For example, adding the simple data to the normal data only increases the number of seen unigrams by 0.2%, representing only about 600 new words. However, the experiments above showed the combined models performed much better than models trained only on normal data. This discrepancy highlights the key problem with normal data: it is out-of-domain data. While it shares some characteristics with the simple data, it represents a different distribution over the language. To make this discrepancy more explicit, we created a sentence aligned data set by aligning the simple and normal articles using the approach from Coster and Kauchak (2011b). This approach has been previously used for aligning English Wikipedia and Simple English Wikipedia with reasonable accuracy. The resulting data set contains 150K aligned simple-normal sentence pairs. Figure 8 shows the perplexity scores for language models trained on this data set. 
Because the data is aligned and therefore similar, we see the perplexity curves run parallel to each other as more data is added. However, even though these 1543 100 150 200 250 300 25K 50K 75K 100K 125K 150K perplexity number of sentences simple-only-aligned normal-only-aligned Figure 8: Language model perplexities for models trained on increasing data sizes for a simplenormal sentence aligned data set. sentences represent the same content, the language use is different between simple and normal and the normal data performs consistently worse. 6.3 A Balance Between Simple and Normal Examining the optimal lambda values for the linearly interpolated models also helps understand the role of the normal data. On the perplexity task, the best perplexity results were obtained with a lambda of 0.5, or an equal weighting between the simple and normal models. Even though the normal data contained six times as many sentences and nine times as many words, the best modeling performance balanced the quality of the simple model with the coverage of the normal model. For the simplification task, the optimal lambda value determined on the development set was 0.98, with a very strong bias towards the simple model. Only when the simple model did not provide differentiation between lexical choices will the normal model play a role in selecting the candidates. For the lexical simplification task, the role of the normal model is even more clear: to handle rare occurrences not covered by the simple model and to smooth the simple model estimates. 7 Conclusions and Future Work In the experiments above we have shown that on two different tasks utilizing additional normal data improves the performance of simple English language models. On the perplexity task, the combined model achieved a performance improvement of 23% over the simple-only model and on the lexical simplification task, the combined model achieved a 24% improvement. These improvements are achieved over a simple-only model that uses all simple English data currently available in this domain. For both tasks, the best improvements were seen when using language model adaptation techniques, however, the adaptation results also indicated that the role of normal data is partially task dependent. On the perplexity task, the best results were achieved with an equal weighting between the simple-only and normal-only model. However, on the lexical simplification task, the best results were achieved with a very strong bias towards the simple-only model. For other simplification tasks, the optimal parameters will need to be investigated. For many of the experiments, combining a smaller amount of simple data (50K-100K sentences) with normal data achieved results that were similar to larger simple data set sizes. For example, on the lexical simplification task, when using a linearly interpolated model, the model combining 100K simple sentences with all the normal data achieved comparable results to the model combining all the simple sentences with all the normal data. This is encouraging for other monolingual domains such as text compression or text simplification in non-English languages where less data is available. There are still a number of open research questions related to simple language modeling. First, further experiments with larger normal data sets are required to understand the limits of adding out-of-domain data. Second, we have only utilized data from Wikipedia for normal text. 
Many other text sources are available and the impact of not only size, but also of domain should be investigated. Third, it still needs to be determined how language model performance will impact sentence-level and document-level simplification approaches. In machine translation, improved language models have resulted in significant improvements in translation performance (Brants et al., 2007). Finally, in this paper we only investigated linearly interpolated language models. Many other domain adaptations techniques exist and may produce language models with better performance. 1544 References Michiel Bacchiani and Brian Roark. 2003. Unsupervised language model adaptation. In Proceedings of ICASSP. Michele Banko, Vibhu Mittal, and Michael Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of ACL. Jerome R. Bellegarda. 2004. Statistical language model adaptation: Review and perspectives. Speech Communication. Or Biran, Samuel Brody, and Noe ´mie Elhadad. 2011. Putting it simply: A context-aware approach to lexical simplification. In Proceedings of ACL. Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of EMNLP. Raman Chandrasekar and Bangalore Srinivas. 1997. Automatic induction of rules for text simplification. Knowledge Based Systems. Stanley Chen, Douglas Beeferman, and Ronald Rosenfeld. 1998. Evaluation metrics for language models. In DARPA Broadcast News Transcription and Understanding Workshop. Trevor Cohn and Mirella Lapata. 2009. Sentence compression as tree transduction. Journal of Artificial Intelligence Research. William Coster and David Kauchak. 2011a. Learning to simplify sentences using Wikipedia. In Proceedings of Text-To-Text Generation. William Coster and David Kauchak. 2011b. Simple English Wikipedia: A new text simplification task. In Proceedings of ACL. Hal Daume and Daniel Marcu. 2002. A noisy-channel model for document compression. In Proceedings of ACL. Christopher Cieri David Graff. 2003. English gigaword. http://www.ldc. upenn.edu/Catalog/CatalogEntry. jsp?catalogId=LDC2003T05. Carsten Eickhoff, Pavel Serdyukov, and Arjen P. de Vries. 2010. Web page classification on child suitability. In Proceedings of CIKM. Michel Galley and Kathleen McKeown. 2007. Lexicalized Markov grammars for sentence compression. In Proceedings of HLT-NAACL. Le Quan Ha, E. I. Sicilia-Garcia, Ji Ming, and F. J. Smith. 2003. Extension of Zipf’s law to word and character n-grams for English and Chinese. Computational Linguistics and Chinese Language Processing. Bo-June Hsu. 2007. Generalized linear interpolation of language models. In IEEE Workshop on ASRU. Frederick Jelinek and Robert Mercer. 1980. Interpolated estimation of markov source parameters from sparse data. In Proceedings of the Workshop on Patter Recognition in Practice. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: a probabilistic approach to sentence compression. Artificial Intelligence. J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics. Gondy Leroy, James E. Endicott, Obay Mouradi, David Kauchak, and Melissa Just. 2012. Improving perceived and actual text difficulty for health information consumers using semi-automated methods. In American Medical Informatics Association (AMIA) Fall Symposium. Courtney Napoles and Mark Dredze. 2010. Learning simple Wikipedia: A cogitation in ascertaining abecedarian language. 
In Proceedings of HLT/NAACL Workshop on Computation Linguistics and Writing. Tadashi Nomoto. 2009. A comparison of model free versus model intensive approaches to sentence compression. In Proceedings of EMNLP. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering. Ronald Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modeling. Computer, Speech and Language. Lucia Specia, Sujay Kumar Jauhar, and Rada Mihalcea. 2012. Semeval-2012 task 1: English lexical simplification. In Joint Conference on Lexical and Computerational Semantics (*SEM). Lucia Specia. 2010. Translating from complex to simplified sentences. In Proceedings of Computational Processing of the Portuguese Language. Andreas Stolcke. 2002. SRILM - An extensible language modeling toolkit. In Proceedings of ICSLP. Hisami Suzuki and Jianfeng Gao. 2005. A comparative study on language model adaptation techniques. In Proceedings of EMNLP. Jenine Turner and Eugene Charniak. 2005. Supervised and unsupervised learning for sentence compression. In Proceedings of ACL. Ken Urano. 2000. Lexical simplification and elaboration: Sentence comprehension and incidental vocabulary acquisition. Master’s thesis, University of Hawaii. 1545 Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of EMNLP. Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of ACL. Mark Yatskar, Bo Pang, Cristian Danescu-NiculescuMizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. In Proceedings of NAACL. Bing Zhao, Matthias Eck, and Stephan Vogel. 2004. Language model adaptation for statistical machine translation with structured query models. In Proceedings of COLING. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of ICCL. 1546
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1547–1557, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Combining Referring Expression Generation and Surface Realization: A Corpus-Based Investigation of Architectures Sina Zarrieß Jonas Kuhn Institut f¨ur maschinelle Sprachverarbeitung University of Stuttgart, Germany sina.zarriess,[email protected] Abstract We suggest a generation task that integrates discourse-level referring expression generation and sentence-level surface realization. We present a data set of German articles annotated with deep syntax and referents, including some types of implicit referents. Our experiments compare several architectures varying the order of a set of trainable modules. The results suggest that a revision-based pipeline, with intermediate linearization, significantly outperforms standard pipelines or a parallel architecture. 1 Introduction Generating well-formed linguistic utterances from an abstract non-linguistic input involves making a multitude of conceptual, discourse-level as well as sentence-level, lexical and syntactic decisions. Work on rule-based natural language generation (NLG) has explored a number of ways to combine these decisions in an architecture, ranging from integrated systems where all decisions happen jointly (Appelt, 1982) to strictly sequential pipelines (Reiter and Dale, 1997). While integrated or interactive systems typically face issues with efficiency and scalability, they can directly account for interactions between discourse-level planning and linguistic realization. For instance, Rubinoff (1992) mentions Example (1) where the sentence planning component needs to have access to the lexical knowledge that “order” and not “home” can be realized as a verb in English. (1) a. *John homed him with an order. b. John ordered him home. In recent data-driven generation research, the focus has somewhat shifted from full data-to-text systems to approaches that isolate well-defined subproblems from the NLG pipeline. In particular, the tasks of surface realization and referring expression generation (REG) have received increasing attention using a number of available annotated data sets (Belz and Kow, 2010; Belz et al., 2011). While these single-task approaches have given rise to many insights about algorithms and corpus-based modelling for specific phenomena, they can hardly deal with aspects of the architecture and interaction between generation levels. This paper suggests a middle ground between full data-to-text and single-task generation, combining two well-studied NLG problems. We integrate a discourse-level approach to REG with sentence-level surface realization in a data-driven framework. We address this integrated task with a set of components that can be trained on flexible inputs which allows us to systematically explore different ways of arranging the components in a generation architecture. Our main goal is to investigate how different architectural set-ups account for interactions between generation decisions at the level of referring expressions (REs), syntax and word order. Our basic set-up is inspired from the Generating Referring Expressions in Context (GREC) tasks, where candidate REs have to be assigned to instances of a referent in a Wikipedia article (Belz and Kow, 2010). 
We have created a dataset of German texts with annotations that extend this standard in three substantial ways: (i) our domain consists of articles about robbery events that mainly involve two main referents, a victim and a perpetrator (perp), (ii) annotations include deep and shallow syntactic relations similar to the representations used in (Belz et al., 2011) (iii) annotations include empty referents, as e.g. in passives and nominalizations directing attention to the phenomenon of implicit reference, which is largely understudied in NLG. Figure 1 presents an example for a deep syntax tree with underspecified RE 1547 (Tree) be agent perp mod on pobj trial mod because sub attack agent perp theme victim perp italians two men the two they <empty> victim man a young victim the he <empty> Figure 1: Underspecified tree with RE candidates slots and lists of candidates REs for each referent. Applying a strictly sequential pipeline on our data, we observe incoherent system output that is related to an interaction of generation levels, very similar to the interleaving between sentence planning and lexicalization in Example (1). A pipeline that first inserts REs into the underspecified tree in Figure 1, then generates syntax and finally linearizes, produces inappropriate sentences like (2-a). (2) a. *[The two men]p are on trial because of an attack by [two italians]p on [a young man]v. b. [Two italians]p are on trial because of an attack on [a young man]v. Sentence (2-a) is incoherent because the syntactic surface obscurs the intended meaning that “two italians” and “the two men” refer to the same referent. In order to generate the natural Sentence (2-b), the RE component needs information about linear precedence of the two perp instances and the nominalization of “attack”. These types of interactions between referential and syntactic realization have been thoroughly discussed in theoretical accounts of textual coherence, as e.g. Centering Theory (Grosz et al., 1995). The integrated modelling of REG and surface realization leads to a considerable expansion of the choice space. In a sentence with 3 referents that each have 10 RE candidates and can be freely ordered, the number of surface realizations increases from 6 to 6·103, assuming that the remaining words can not be syntactically varied. Thus, even when the generation problem is restricted to these tasks, a fully integrated architecture faces scalability issues on realistic corpus data. In this work, we assume a modular set-up of the generation system that allows for a flexible ordering of the single components. Our experiments vary 3 parameters of the generation architecture: 1) the sequential order of the modules, 2) parallelization of modules, 3) joint vs. separate modelling of implicit referents. Our results suggest that the interactions between RE and syntax can be modelled in sequential generation architecture where the RE component has access to information about syntactic realization and an approximative, intermediate linearization. Such a system is reminiscent of earlier work in rulebased generation that implements an interactive or revision-based feedback between discourse-level planning and linguistic realisation (Hovy, 1988; Robin, 1993). 2 Related Work Despite the common view of NLG as a pipeline process, it is a well-known problem that highlevel, conceptual knowledge and low-level linguistic knowledge are tightly interleaved (Danlos, 1984; Mellish et al., 2000). 
In rule-based, strictly sequential generators these interactions can lead to a so-called generation gap, where a downstream module cannot realize a text or sentence plan generated by the preceding modules (Meteer, 1991; Wanner, 1994). For this reason, a number of other architectures has been proposed, see De Smedt et al. (1996) for an overview. For reasons of tractability and scalability, many practical NLG systems still have been designed as sequential pipelines that follow the basic layout of macroplanning-microplanning-linguistic realization (Reiter, 1994; Cahill et al., 1999; Bateman and Zock, 2003). In recent data-driven research on NLG, many single tasks have been addressed with corpusbased methods. For surface realization, the standard set-up is to regenerate from syntactic representations that have been produced for realistic corpus sentences. The first widely known statistical approach by Langkilde and Knight (1998) used language-model n-gram statistics on a word lattice of candidate realisations to guide a ranker. Subsequent work explored ways of exploiting linguistically annotated data for trainable generation models (Ratnaparkhi, 2000; Belz, 2005). Work on data-driven approaches has led to insights about the importance of linguistic features for sentence 1548 linearization decisions (Ringger et al., 2004; Filippova and Strube, 2007; Cahill and Riester, 2009). (Zarrieß et al., 2012) have recently argued that the good performance of these linguistically motivated word order models, which exploit morphosyntactic features of noun phrases (i.e. referents), is related to the fact that these morphosyntactic features implicitly encode a lot of knowledge about the underlying discourse or information structure. A considerable body of REG research has been done in the paradigm established by Dale (1989; 1995). More closely related to our work are approaches in the line of Siddarthan and Copestake (2004) or Belz and Varges (2007) who generate contextually appropriate REs for instances of a referent in a text. Belz and Varges (2007)’s GREC data set includes annotations of implicit subjects in coordinations. Zarrieß et al. (2011) deal with implicit subjects in passives, proposing a set of heuristics for adding these agents to the generation input. Roth and Frank (2012) acquire automatic annotations of implicit roles for the purpose of studying coherence patterns in texts. Implicit referents have also received attention for the analysis of semantic roles (Gerber and Chai, 2010; Ruppenhofer et al., 2010). Statistical methods for data-to-text generation have been explored only recently. Belz (2008) trains a probabilistic CFG to generate weather forecasts, Chen et al. (2010) induce a synchronous grammar to generate sportcaster text. Both address a restricted domain where a direct alignment between units in the non-linguistic representation and the linguistic utterance can be learned. Marciniak and Strube (2005) propose an ILP model for global optimization in a generation task that is decomposed into a set of classifiers. Bohnet et al. (2011) deal with multi-level generation in a statistical framework and in a less restricted domain. They adopt a standard sequential pipeline approach. Recent corpus-based generation approaches faced the problem that existing standard treebank representations for parsing or other analysis tasks do not necessarily fit the needs of generation (Bohnet et al., 2010; Wanner et al., 2012). Zarrieß et al. 
(2011) discuss the problem of an input representation that is appropriately underspecified for the realistic generation of voice alternations. 3 The Data Set The data set for our generation experiments consists of 200 newspaper articles about robbery events. The articles were extracted from a large German newspaper corpus. A complete example text with RE annotations is given in Figure 2, Table 1 summarizes some data set statistics. 3.1 RE annotation The RE annotations mark explicit and implicit mentions of referents involved in the robbery event described in an article. Explicit mentions are marked as spans on the surface sentence, labeled with the referent’s role and an ID. We annotate the following referential roles: (i) perpetrator (perp), (ii) victim, (iii) source, according to the core roles of the Robbery frame in English FrameNet. We include source since some texts do not mention a particular victim, but rather the location of the robbery (e.g. a bank, a service station). The ID distinguishes referents that have the same role, e.g. “the husband” and the “young family” in Sentences (3-a) and (3-d) in Figure 2. Each RE is linked to its syntactic head. This complies with the GREC data sets, and is also useful for further annotation of the deep syntax level (see Section 3.2). The RE implicit mentions of victim, perp, and source are annotated as attributes of their syntactic heads in the surface sentence. We consider the following types of implicit referents: (i) agents in passives (e.g. “robbed” in (3-a)), (ii) arguments of nominalizations (e.g. “resistance” in (3-e)), (iii) possessives (e.g. “watch” in (3-f)), (iv) missing subjects in coordinations. (e.g. “flee” in (3-f)) The brat tool (Stenetorp et al., 2012) was used for annotation. We had 2 annotators with a computational linguistic background, provided with annotation guidelines. They were trained on a set of 20 texts. We measure a good agreement on another set of 15 texts: the simple pairwise agreement for explicit mentions is 95.14%-96.53% and 78.94%76.92% for implicit mentions.1 3.2 Syntax annotation The syntactic annotation of our data includes two layers: shallow and deep, labeled dependencies, similar to the representation used in surface realization shared tasks (Belz et al., 2011). We use 1Standard measures for the “above chance annotator agreement” are only defined for task where the set of annotated items is pre-defined. 1549 (3) a.     Junge Familie v:0     Young family auf on dem the Heimwegposs:v way homeposs:v ausgeraubtag:p robbedag:p b. Die The Polizei police sucht looks nach for zwei ungepflegt wirkenden jungen M¨annern im Alter von etwa 25 Jahren p:0. two shabby-looking young men of about 25 years . c. Sie p:0 They sollen are said to am on Montag Monday gegen around 20 20 Uhr o’clock     eine junge Familie mit ihrem sieben Monate alten Baby v:0     a young family with their seven month old baby auf on dem the Heimwegposs:v way homeposs:v von from einem a Einkaufsbummel shopping tour ¨uberfallen attacked und and ausgeraubt robbed haben. have. d. Wie As die the Polizei police berichtet, reports, drohten threatened die zwei M¨anner p:0 the two men    dem Ehemann v:1,    the husband    ihn v:1    him zusammenzuschlagen. beat up. e.    Er v:1    He gab gave deshalb therefore    seine v:1    his Brieftasche wallet ohne without Gegenwehrag:v,the:p resistanceag:v,the:p heraus. out. f. 
Anschließend Afterwards nahmen took    ihm v:1    him die R¨auber p:0 the robbers noch also die the Armbanduhrposs:v watchposs:v ab off und and fl¨uchtetenag:p. fleedag:p. Figure 2: Example text with RE annotations, oval boxes mark victim mentions, square boxes mark perp mentions, heads of implicit arguments are underlined the Bohnet (2010) dependency parser to obtain an automatic annotation of shallow or surface dependencies for the corpus sentences. The deep syntactic dependencies are derived from the shallow layer by a set of hand-written transformation rules. The goal is to link referents to their main predicate in a uniform way, independently of the surface-syntactic realization of the verb. We address passives, nominalizations and possessives corresponding to the contexts where we annotated implicit referents (see above). The transformations are defined as follows: 1. remove auxiliary nodes, verb morphology and finiteness, a tense feature distinguishes past and present, e.g. “haben:AUX ¨uberfallen:VVINF” (have attacked) maps to “¨uberfallen:VV:PAST” (attack:PAST) 2. map subjects in actives and oblique agents in passives to “agents”; objects in actives and subjects in passive to “themes”, e.g. victim/subj was attacked by perp/obl-ag maps to perp/agent attack victim/theme 3. attach particles to verb lemma, e.g. “gab” ... “heraus” in (3-e) is mapped to “herausgeben” (give to) 4. map nominalized to verbal lemmas, their prepositional and genitive arguments to semantic subjects and objects, e.g. attack on victim is mapped to attack victim/theme 5. normalize prenominal and genitive postnominal posessives, e.g. “seine Brieftasche” (his wallet) and “die Brieftasche des Opfers” (the wallet of the victim) map to “die Brieftasche POSS victim” (the wallet of victim), only applies if possessive is an annotated RE Nominalizations are mapped to their verbal base forms on the basis of lexicalized rules for the nominalized lemmas observed in the corpus. The other transformations are defined on the shallow dependency annotation. # sentences 2030 # explicit REs 3208 # implicit REs 1778 # passives 383 # nominalizations 393 # possessives 1150 Table 1: Basic annotation statistics 3.3 Multi-level Representation In the final representation of our data set, we integrate the RE and deep syntax annotation by replacing subtrees corresponding to an RE span. The RE slot in the tree of the sentence is labeled with its referential role and its ID. All RE subtrees for a referent in a text are collected in a candidate list which is initialized with three default REs: (i) a pronoun, (ii) a default nominal (e.g. “the victim”), (iii) the empty RE. In contrast to the GREC data sets, our RE candidates are not represented as the original surface strings, but as non-linearized subtrees. The resulting multi-layer representation for each text is structured as follows: 1. unordered deep trees with RE slots (deepSyn−re) 2. unorderd shallow trees with RE slots (shallowSyn−re) 3. unordered RE subtrees 4. linearized, fully specified surface trees (linSyn+re) 5. alignments between nodes in 1., 2., 4. The generation components in Section 4 also use intermediate layers where REs are inserted into the deep trees (deepSyn+re) or shallow trees (shallowSyn+re). Nodes in unordered trees are deterministically sorted by their : 1. distance to the root, 2. label, 1550 3. PoS tag, 4. lemma. The generation components traverse the nodes in this the order. 
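A sketch of how this multi-layer representation can be held in code; the class and field names are illustrative rather than taken from the paper, but the sort key mirrors the deterministic node ordering just described (distance to the root, then label, PoS tag and lemma), and RE candidates are kept as unlinearized subtrees plus the three defaults.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TreeNode:
    # One node of an unordered deep or shallow dependency tree.
    lemma: str
    pos: str
    label: str                       # dependency label to the mother node
    depth: int                       # distance to the root
    re_slot: Optional[str] = None    # e.g. "perp:0" if the node is an RE slot
    children: List["TreeNode"] = field(default_factory=list)

def traversal_order(nodes):
    # Deterministic order in which the components visit the nodes:
    # 1. distance to the root, 2. label, 3. PoS tag, 4. lemma.
    return sorted(nodes, key=lambda n: (n.depth, n.label, n.pos, n.lemma))

# RE candidate lists are indexed by referent role and ID, e.g.
# candidates[("victim", 0)] = observed_subtrees + [pronoun, default_nominal, empty_re]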
4 Generation Systems Our main goal is to investigate different architectures for combined surface realization and referring expression generation. We assume that this task is split into three main modules: a syntax generator, an REG component, and a linearizer. The components are implemented in a way that they can be trained and applied on varying inputs, depending on the pipeline. Section 4.1 describes the basic set-up of our components. Section 4.2 defines the architectures that we will compare in our experiments (Section 5). Section 4.3 presents the implementation of the underlying feature models. 4.1 Components 4.1.1 SYN: Deep to Shallow Syntax For mapping deep to shallow dependency trees, the syntax generator induces a probabilistic tree transformation. The transformations are restricted to verb nodes in the deep tree (possessives are handled in the RE module) and extracted from the alignments between the deep and shallow layer in the training input. As an example, the deep node “attack:VV” aligns to “have:AUX attacked:VVINF”, “attacks:VVFIN”, “the:ART attack:NN on:PRP”. The learner is implemented as a ranking component, trained with SVMrank (Joachims, 2006). During training, each instance of a verb node has one optimal shallow dependency alignment and a set of distractor candidates. During testing, the module has to pick the best shallow candidate according to its feature model. In our crossvalidation set-up (see Section 5), we extract, on average, 374 transformations from the training sets. This set subdivides into nonlexicalized and lexicalized transformations. The mapping rule in (4-a) that simply rewrites the verb underspecified PoS tag to the finite verb tag in the shallow tree illustrates the non-lexicalized case. Most transformation rules (335 out of 374 on average) are lexicalized for a specific verb lemma and mostly transform nominalizations as in rule (4-b) and particles (see Section 3.2). (4) a. (x,lemma,VV,y) →(x,lemma,VVFIN,y) b. (x,¨uberfallen/attack,VV,y) →(x,bei/at,PREP,y), (z, ¨Uberfall/attack,NN,x),(q,der/the,ART,z) The baseline for the verb transformation component is a two-step procedure: 1) pick a lexicalized rule if available for that verb lemma, 2) pick the most frequent transformation. 4.1.2 REG: Realizing Referring Expressions Similar to the syntax component, the REG module is implemented as a ranker that selects surface RE subtrees for a given referential slot in a deep or shallow dependency tree. The candidates for the ranking correspond to the entire set of REs used for that referential role in the original text (see Section 3.1). The basic RE module is a joint model of all RE types, i.e. nominal, pronominal and empty realizations of the referent. For the experiment in Section 5.4, we use an additional separate classifier for implicit referents, also trained with SVMrank. It uses the same feature model as the full ranking component, but learns a binary distinction for implicit or explicit mentions of a referent. The explicit mentions will be passed to the RE ranking component. The baseline for the REG component is defined as follows: if the preceding and the current RE slot are instances of the same referent, realize a pronoun, else realize the longest nominal RE candidate that has not been used in the preceding text. 4.1.3 LIN: Linearization For linearization, we use the state-of-the-art dependency linearizer described in Bohnet et al. (2012). We train the linearizer on an automatically parsed version of the German TIGER treebank (Brants et al., 2002). 
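The REG baseline of Section 4.1.2 can be written down compactly. The sketch below assumes that RE slots arrive in text order and that each candidate subtree carries a length and flags marking pronoun and nominal candidates; all attribute names are placeholders for illustration.

def reg_baseline(slots, candidates):
    # slots: (referent_id, slot) pairs in text order.
    # candidates: referent_id -> list of RE candidate subtrees for that referent.
    chosen, used, prev_ref = [], set(), None
    for ref_id, slot in slots:
        cands = candidates[ref_id]
        if ref_id == prev_ref:
            # Preceding slot has the same referent: realize a pronoun.
            re_tree = next(c for c in cands if c.is_pronoun)
        else:
            # Otherwise: longest nominal candidate not yet used in the text
            # (falling back to all nominals once every candidate has been used).
            nominals = [c for c in cands if c.is_nominal and id(c) not in used]
            nominals = nominals or [c for c in cands if c.is_nominal]
            re_tree = max(nominals, key=lambda c: c.length)
        used.add(id(re_tree))
        chosen.append((slot, re_tree))
        prev_ref = ref_id
    return chosen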
This version was produced with the dependency parser by Bohnet (2010), trained on the dependency conversion of TIGER by Seeker and Kuhn (2012). 4.2 Architectures Depending on the way the generation components are combined in an architecture, they will have access to different layers of the input representation. The following definitions of architectures recur to the layers introduced in Section 3.3. 4.2.1 First Pipeline The first pipeline corresponds most closely to a standard generation pipeline in the sense of (Reiter and Dale, 1997). REG is carried out prior to surface realization such that the RE component does not have access to surface syntax or word order whereas the SYN component has access to fully specified RE slots. • training 1551 1. train REG: (deepSyn−re, deepSyn+re) 2. train SYN: (deepSyn+re, shallowSyn+re) • prediction 1. apply REG: deepSyn−re →deepSyn+re 2. apply SYN: deepSyn+re →shallowSyn+re 3. linearize: shallowSyn+re →linSyn+re 4.2.2 Second Pipeline In the second pipeline, the order of the RE and SYN component is switched. In this case, REG has access to surface syntax without word order but the surface realization is trained and applied on trees with underspecified RE slots. • training 1. train SYN: (deepSyn−re, shallowSyn−re) 2. train REG: (shallowSyn−re, shallowSyn+re) • prediction 1. apply SYN: deepSyn−re →shallowSyn−re 2. apply REG: shallowSyn−re → shallowSyn+re 3. linearize: shallowSyn+re →linSyn+re 4.2.3 Parallel System A well-known problem with pipeline architectures is the effect of error propagation. In our parallel system, the components are trained independently of each other and applied in parallel on the deep syntactic input with underspecified REs. • training 1. train SYN: (deepSyn−re, shallowSyn−re) 2. train REG: (deepSyn−re, deepSyn+re) • prediction 1. apply REG and SYN: deepSyn−re →shallowSyn+re 2. linearize: shallowSyn+re →linSyn+re 4.2.4 Revision-based System In the revision-based system, the RE component has access to surface syntax and a preliminary linearization, called prelinSyn. In this set-up, we apply the linearizer first on trees with underspecified RE slots. For this step, we insert the default REs for the referent into the respective slots. After REG, the tree is linearized once again. • training 1. train SYN on gold pairs of (deepSyn−re, shallowSyn−re) 2. train REG on gold pairs of (prelinSyn−re, prelinSyn+re) • prediction 1. apply SYN: deepSyn−re →shallowSyn−re 2. linearize: shallowSyn−re →prelinSyn−re 3. apply REG: prelinSyn−re →prelinSyn+re 4. linearize: prelinSyn+re →linSyn+re 4.3 Feature Models The implementation of the feature models is based on a general set of templates for the SYN and REG component. The exact form of the models depends on the input layer of a component in a given architecture. For instance, when SYN is trained on deepSyn−re, the properties of the children nodes are less specific for verbs that have RE slots as their dependents. When the SYN component is trained on deepSyn+re, lemma and POS of the children nodes are always specified. The feature templates for SYN combine properties of the shallow candidate nodes (label, PoS and lemma for top node and its children) with the properties of the instance in the tree: (i) lemma, tense, (ii) sentence is a header, (iii) label, PoS, lemma of mother node, children and grandchildren nodes (iv) number, lemmas of other verbs in the sentence. 
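Before turning to the REG templates, it may help to make the architectures of Section 4.2 concrete as compositions of the three components over the layers of Section 3.3. The sketch below is illustrative only: the component callables and the tree type are assumptions, not the actual interfaces of our implementation.

```python
from typing import Any, Callable

Tree = Any  # stands in for a dependency tree in one of the layers of Section 3.3


def first_pipeline(deep: Tree, reg: Callable, syn: Callable, lin: Callable) -> Tree:
    """Section 4.2.1: REG is applied before surface realization."""
    deep_re = reg(deep)            # deepSyn-re    -> deepSyn+re
    shallow_re = syn(deep_re)      # deepSyn+re    -> shallowSyn+re
    return lin(shallow_re)         # shallowSyn+re -> linSyn+re


def second_pipeline(deep: Tree, reg: Callable, syn: Callable, lin: Callable) -> Tree:
    """Section 4.2.2: surface realization is applied before REG."""
    shallow = syn(deep)            # deepSyn-re    -> shallowSyn-re
    shallow_re = reg(shallow)      # shallowSyn-re -> shallowSyn+re
    return lin(shallow_re)


def revision_based(deep: Tree, reg: Callable, syn: Callable, lin: Callable,
                   insert_default_res: Callable) -> Tree:
    """Section 4.2.4: a preliminary linearization with default REs is
    revised after REG has filled the RE slots."""
    shallow = syn(deep)                        # deepSyn-re   -> shallowSyn-re
    prelin = lin(insert_default_res(shallow))  # approximate word order (prelinSyn-re)
    prelin_re = reg(prelin)                    # prelinSyn-re -> prelinSyn+re
    return lin(prelin_re)                      # final linearization (linSyn+re)
```

The parallel system of Section 4.2.3 instead applies reg and syn independently to the deep input and merges their outputs before linearization.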
The feature templates for REG combine properties of the candidate RE (PoS and lemma for top node and its children, length) with properties of the RE slot in the tree: lemma, PoS and labels for the (i) mother node, (ii) grandmother node, (iii) uncle and sibling nodes. Additionally, we implement a small set of global properties of a referent in a text: (i) identity is known, (ii) plural or singular referent, (iii) age is known, and a number of contextual properties capturing the previous referents and their predicted REs: (i) role and realization of the preceding referent, (ii) last mention of the current referent, (iii) realization of the referent in the header. 5 Experiments In this experimental section, we provide a corpusbased evaluation of the generation components and architectures introduced in Section 4. In the following, Section 5.1 presents the details of our evaluation methodology. In Section 5.2, we discuss the first experiment that evaluates the pipeline architectures and the single components on oracle inputs. Section 5.3 describes an experiment which compares the parallel and the revision-based architecture against the pipeline. In Section 5.4, we compare two methods for dealing with the implicit referents in our data. Section 5.5 provides some general discussion of the results. 1552 Sentence overlap SYN Accuracy RE Accuracy Input System BLEU NIST BLEUr String Type String Type Impl deepSyn−re Baseline 42.38 9.9 47.94 35.66 44.81 33.3 36.03 50.43 deepSyn−re 1st pipeline 54.65 11.30 59.95 57.09 68.15 54.61 71.51 84.72 deepSyn−re 2nd pipeline 54.28 11.25 59.62 59.14 68.58 52.24 68.2 82 gold deepSyn+re SYN→LIN 63.9 12.7 62.86 60.83 69.74 100 100 100 gold shallowSyn−re REG→LIN 60.57 11.87 68.06 100 100 60.53 75.86 88.86 gold shallowSyn+re LIN 79.17 13.91 72.7 100 100 100 100 100 Table 2: Evaluating pipeline architectures against the baseline and upper bounds 5.1 Evaluation Measures We split our data set into 10 splits of 20 articles. We use one split as the development set, and crossvalidate on the remaining splits. In each case, the downstream modules of the pipeline will be trained on the jackknifed training set. Text normalization: We carry out automatic evaluation calculated on lemmatized text without punctuation, excluding additional effects that would be introduced from a morphology generation component. Measures: First, we use a number of evaluation measures familiar from previous generation shared tasks: 1. BLEU, sentence-level geometric mean of 1- to 4-gram precision, as in (Belz et al., 2011) 2. NIST, sentence-level n-gram overlap weighted in favour of less frequent n-grams, as in (Belz et al., 2011) 3. RE Accuracy on String, proportion of REs selected by the system with a string identical to the RE string in the original corpus, as in (Belz and Kow, 2010) 4. RE Accuracy on Type, proportion of REs selected by the system with an RE type identical to the RE type in the original corpus, as in (Belz and Kow, 2010) Second, we define a number of measures motivated by our specific set-up of the task: 1. BLEUr, sentence-level BLEU computed on postprocessed output where predicted referring expressions for victim and perp are replaced in the sentences (both gold and predicted) by their original role label, this score does not penalize lexical mismatches between corpus and system REs 2. RE Accuracy on Impl, proportion of REs predicted correctly as implicit/non-implicit 3. 
SYN Accuracy on String, proportion of shallow verb candidates selected by the system with a string identical to the verb string in the original corpus 4. SYN Accuracy on Type, proportion of shallow verb candidates selected by the system with a syntactic category identical to the category in the original corpus 5.2 Pipelines and Upper Bounds The first experiment addresses the first and second pipeline introduced in Section 4.2.1 and 4.2.2. The baseline combines the baseline version of the SYN component (Section 4.1.1) and the REG component (Section 4.1.2) respectively. As we report in Table 2, both pipelines largely outperform the baseline. Otherwise, they obtain very similar scores in all measures with a small, weakly significant tendency for the first pipeline. The only remarkable difference is that the accuracy of the individual components is, in each case, lower when they are applied as the second step in the pipeline. Thus, the RE accuracy suffers from mistakes from the predicted syntax in the same way that the quality of syntax suffers from predicted REs. The three bottom rows in Table 2 report the performance of the individual components and linearization when they are applied to inputs with an REG and SYN oracle, providing upper bounds for the pipelines applied on deepSyn−re. When REG and linearization are applied on shallowSyn−re with gold shallow trees, the BLEU score is lower (60.57) as compared to the system that applies syntax and linearization on deepSyn+re, deep trees with gold REs (BLEU score of 63.9). However, the BLEUr score, which generalizes over lexical RE mismatches, is higher for the REG→LIN components than for SYN→LIN. Moreover, the BLEUr score for the REG→LIN system comes close to the upper bound that applies linearization on linSyn+re, gold shallow trees with gold REs (BLEUr of 72.4), whereas the difference in standard BLEU and NIST is high. This effect indicates that the RE prediction mostly decreases BLEU due to lexical mismatches, whereas the syntax prediction is more likely to have a negative impact on final linearization. The error propagation effects that we find in the first and second pipeline architecture clearly show that decisions at the levels of syntax, reference and word order interact, otherwise their predic1553 Input System BLEU NIST BLEUr deepSyn−re 1st pipeline 54.65 11.30 59.95 deepSyn−re Parallel 54.78 11.30 60.05 deepSyn−re Revision 56.31 11.42 61.30 Table 3: Architecture evaluation tion would not affect each other. In particular, the REG module seems to be affected more seriously, the String Accuracy decreases from 60.53 on gold shallow trees to 52.24 on predicted shallow trees whereas the Verb String Accuracy decreases from 60.83 on gold REs to 57.04 on predicted REs. 5.3 Revision or parallelism? The second experiment compares the first pipeline against the parallel and the revision-based architecture introduced in Section 4.2.3 and 4.2.4. The evaluation in Table 3 shows that the parallel architecture improves only marginally over the pipeline. By contrast, we obtain a clearly significant improvement for the revision-based architecture on all measures. The fact that this architecture significantly improves the BLEU, NIST and the BLEUr score of the parallel system indicates that the REG benefits from the predicted syntax when it is approximatively linearized. The fact that also the BLEUr score improves shows that a higher lexical quality of the REs leads to better final linearizations. 
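The BLEUr comparisons above rest on a simple postprocessing step: victim and perp REs are replaced by their role labels in both the gold and the predicted sentence before BLEU is computed, so that lexical RE mismatches are not penalized. A minimal sketch of that replacement, assuming token-level RE span annotations (the data structures are illustrative, not our evaluation code):

```python
from typing import List, Tuple


def mask_role_spans(tokens: List[str],
                    re_spans: List[Tuple[int, int, str]]) -> List[str]:
    """Replace annotated RE spans by their role label ("VICTIM" / "PERP")
    before BLEU is computed, so only non-RE material is compared.
    re_spans holds (start, end, role) with end exclusive."""
    masked: List[str] = []
    spans = sorted(re_spans)
    i = 0
    while i < len(tokens):
        span = next((s for s in spans if s[0] == i), None)
        if span is not None:
            masked.append(span[2].upper())  # one placeholder token per RE
            i = span[1]
        else:
            masked.append(tokens[i])
            i += 1
    return masked


# A victim RE such as "the young woman" collapses to a single VICTIM token.
print(mask_role_spans(["the", "young", "woman", "resisted"], [(0, 3, "victim")]))
# -> ['VICTIM', 'resisted']
```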
Table 4 shows the performance of the REG module on varying input layers, providing a more detailed analysis of the interaction between RE, syntax and word order. In order to produce the deeplinSyn−re layer, deep syntax trees with approximative linearizations, we preprocessed the deep trees by inserting a default surface transformation for the verb nodes. We compare this input for REG against the prelinSyn−re layer used in the revision-based architecture, and the deepSyn−re layer used in the pipeline and the parallel architecture. The REG module benefits from the linearization in the case of deeplinSyn−re and prelinSyn−re, outperforming the component trained applied on the non-linearized deep syntax trees. However, the REG module applied on prelinSyn−re, predicted shallow and linearized trees, clearly outperforms the module applied on deeplinSyn−re. This shows that the RE prediction can actually benefit from the predicted shallow syntax, but only when the predicted trees are approximatively linearized. As an upper bound, we report the performance obtained on RE Accuracy Input System String Type Impl deepSyn−re RE 54.61 71.51 84.72 deeplinSyn−re RE 56.78 72.23 84.71 prelinSyn−re RE 58.81 74.34 86.37 gold linSyn−re RE 68.63 83.63 94.74 Table 4: RE generation from different input layers linSyn−re, gold shallow trees with gold linearizations. This set-up corresponds to the GREC tasks. The gold syntax leads to a huge increase in performance. These results strengthen the evidence from the previous experiment that decisions at the level of syntax, reference and word order are interleaved. A parallel architecture that simply “circumvents” error propagation effects by making decisions independent of each other is not optimal. Instead, the automatic prediction of shallow syntax can positively impact on RE generation if these shallow trees are additionally processed with an approximative linearization step. 5.4 A joint treatment of implicit referents? The previous experiments have pursued a joint approach for modeling implicit referents. The hypothesis for this experiment is that the SYN component and the intermediate linearization in a revision-based architecture could benefit from a separate treatment of implicit referents since verb alternations like passive or nominalization often involve referent deletions. The evaluation in Table 5 provides contradictory results depending on the evaluation measure. For the first pipeline, the system with a separate treatment of implicit referents significantly outperforms the joint system in terms of BLEU. However, the BLEUr score does not improve. In the revision-based architecture, we do not find a clear result for or against a joint modelling approach. The revision-based system with disjoint modelling of implicits shows a slight, non-significant increase in BLEU score. By contrast, the BLEUr score is signficantly better for the joint approach. We experimented with parallelization of syntax generation and prediction of implicit referents in a revision-based system. This has a small positive effect on the BLEUr score and a small negative effect on the plain BLEU and NIST score. These contradictory scores might indicate that the automatic evaluation measures cannot capture all aspects of text quality, an issue that we discuss in the following. 1554 (5) Generated by sequential system: a. 
Deshalb Therefore gab gave dem T¨ater to the robber    seine    his Brieftasche wallet ohne without daß that     das Opfer    the victim Widerstand resistance leistet shows heraus. out. b. Er He nahm takes anschließend afterwards     dem Opfer    the victim die the Armbanduhr watch ab off und and der T¨ater the robber fl¨uchtete. fleed. (6) Generated by revision-based system: a.     Das Opfer    The victim gibt gave deshalb therefore    seine    his Brieftasche wallet ohne without Widerstand resistance zu to leisten show heraus. out. b. Anschließend Afterwards nahm took der T¨ater the robber     dem Opfer    the victim die the Armbanduhr watch ab off und and fl¨uchtete. fleed. Figure 3: Two automatically generated outputs for the Sentences (3e-f) in Figure 2. Joint System BLEU NIST BLEUr + 1st pipeline 54.65 11.30 59.95 1st pipeline 55.38 11.48 59.52 + Revision 56.31 11.42 61.30 Revision 56.42 11.54 60.52 Parallel+Revision 56.29 11.51 60.63 Table 5: Implicit reference and architectures 5.5 Discussion The results presented in the preceding evaluations consistenly show the tight connections between decisions at the level of reference, syntax and word order. These interactions entail highly interdependent modelling steps: Although there is a direct error propagation effect from predicted verb transformation on RE accuracy, predicted syntax still leads to informative intermediate linearizations that improve the RE prediction. Our optimal generation architecture thus has a sequential setup, where the first linearization step can be seen as an intermediate feedback that is revised in the final linearization. This connects to work in, e.g. (Hovy, 1988; Robin, 1993). In Figure 3, we compare two system outputs for the last two sentences of the text in Figure 2. The output of the sequential system is severely incoherent and would probably be rejected by a human reader: In sentence (5a) the victim subject of an active verb is deleted, and the relation between the possessive and the embedded victim RE is not clear. In sentence (5b) the first conjunct realizes a pronominal perp RE and the second conjunct a nominal perp RE. The output of the revision-based system reads much more natural. This example shows that the extension of the REG problem to texts with more than one main referent (as in the GREC data set) yields interesting inter-sentential interactions that affect textual coherence. We are aware of the fact that our automatic evaluation might only partially render certain effects, especially with respect to textual coherence. It is likely that the BLEU scores do not capture the magnitude of the differences in text quality illustrated by the Examples (5-6). Ultimately, a human evaluation for this task is highly desirable. We leave this for future work since our integrated set-up rises a number of questions with respect to evaluation design. In a preliminary analysis, we noticed the problem that human readers find it difficult to judge discourse-level properties of a text like coherence or naturalness when the generation output is not perfectly grammatical or fluent at the sentence level. 6 Conclusion We have presented a data-driven approach for investigating generation architectures that address discourse-level reference and sentence-level syntax and word order. The data set we created for our experiments basically integrates standards from previous research on REG and surface realization and extends the annotations to further types of implicit referents. 
Our results show that interactions between the different generation levels are best captured in a sequential, revision-based pipeline where the REG component has access to predictions from the syntax and the linearization module. These empirical findings obtained from experiments with generation architectures have clear connections to theoretical accounts of textual coherence. Acknowledgements This work was supported by the Deutsche Forschungsgemeinschaft (German Research Foundation) in SFB 732 Incremental Specification in Context, project D2. 1555 References Douglas Edmund Appelt. 1982. Planning natural language utterances to satisfy multiple goals. Ph.D. thesis, Stanford, CA, USA. John Bateman and Michael Zock. 2003. Natural Language Generation. In Ruslan Mitkov, editor, The Oxford Handbook of Computational Linguistics. Oxford University Press. Anja Belz and Eric Kow. 2010. The GREC Challenges 2010: overview and evaluation results. In Proc. of the 6th International Natural Language Generation Conference, INLG ’10, pages 219–229, Stroudsburg, PA, USA. Anja Belz and Sebastian Varges. 2007. Generation of repeated references to discourse entities. In Proc. of the 11th European Workshop on Natural Language Generation, ENLG ’07, pages 9–16, Stroudsburg, PA, USA. Association for Computational Linguistics. Anja Belz, Mike White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The first surface realisation shared task: Overview and evaluation results. In Proc. of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 217–226, Nancy, France, September. Association for Computational Linguistics. Anja Belz. 2005. Statistical generation: Three methods compared and evaluated. In Proc. of the 10th European Workshop on Natural Language Generation, pages 15–23. Anja Belz. 2008. Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models. Nat. Lang. Eng., 14(4):431–455, October. Bernd Bohnet, Leo Wanner, Simon Milles, and Alicia Burga. 2010. Broad coverage multilingual deep sentence generation with a stochastic multi-level realizer. In Proc. of the 23rd International Conference on Computational Linguistics, Beijing, China. Bernd Bohnet, Simon Mille, Benoˆıt Favre, and Leo Wanner. 2011. <stumaba >: From deep representation to surface. In Proc. of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 232–235, Nancy, France, September. Bernd Bohnet, Anders Bj¨orkelund, Jonas Kuhn, Wolfgang Seeker, and Sina Zarriess. 2012. Generating non-projective word order in statistical linearization. In Proc. of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 928– 939, Jeju Island, Korea, July. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proc. of the 23rd International Conference on Computational Linguistics, pages 89–97, Beijing, China, August. Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER Treebank. In Proc. of the Workshop on Treebanks and Linguistic Theories. Aoife Cahill and Arndt Riester. 2009. Incorporating Information Status into Generation Ranking. In Proc. of the 47th Annual Meeting of the ACL, pages 817–825, Suntec, Singapore, August. Lynne Cahill, Christy Doran, Roger Evans, Chris Mellish, Daniel Paiva, Mike Reape, Donia Scott, and Neil Tipper. 1999. 
In search of a reference architecture for nlg systems. In Proc. of the European Workshop on Natural Language Generation (EWNLG), pages 77–85. David L. Chen, Joohyun Kim, and Raymond J. Mooney. 2010. Training a multilingual sportscaster: Using perceptual context to learn language. Journal of Artificial Intelligence Research, 37:397–435. Robert Dale and Ehud Reiter. 1995. Computational interpretations of the gricean maxims in the generation of referring expressions. Cognitive Science, 19(2):233–263. Robert Dale. 1989. Cooking up referring expressions. In Proc. of the 27th Annual Meeting of the Association for Computational Linguistics, pages 68–75, Vancouver, British Columbia, Canada, June. Laurence Danlos. 1984. Conceptual and linguistic decisions in generation. In Proc. of the 10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics, pages 501–504, Stanford, California, USA, July. Koenraad De Smedt, Helmut Horacek, and Michael Zock. 1996. Architectures for natural language generation: Problems and perspectives. In Trends In Natural Language Generation: An Artifical Intelligence Perspective, pages 17–46. Springer-Verlag. Katja Filippova and Michael Strube. 2007. Generating constituent order in german clauses. In Proc. of the 45th Annual Meeting of the Association for Computational Linguistics, Prague, Czech Republic. Matthew Gerber and Joyce Chai. 2010. Beyond nombank: A study of implicit arguments for nominal predicates. In Proc. of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1583–1592, Uppsala, Sweden, July. Barbara J. Grosz, Aravind Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. 1556 Eduard H. Hovy. 1988. Planning coherent multisentential text. In Proc. of the 26th Annual Meeting of the Association for Computational Linguistics, pages 163–169, Buffalo, New York, USA, June. Thorsten Joachims. 2006. Training linear SVMs in linear time. In Proc. of the ACM Conference on Knowledge Discovery and Data Mining (KDD), pages 217–226. Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proc. of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 704–710, Montreal, Quebec, Canada, August. Association for Computational Linguistics. Tomasz Marciniak and Michael Strube. 2005. Beyond the pipeline: discrete optimization in nlp. In Proc. of the 9th Conference on Computational Natural Language Learning, CONLL ’05, pages 136– 143, Stroudsburg, PA, USA. Chris Mellish, Roger Evans, Lynne Cahill, Christy Doran, Daniel Paiva, Mike Reape, Donia Scott, and Neil Tipper. 2000. A representation for complex and evolving data dependencies in generation. In Proc. of the 6th Conference on Applied Natural Language Processing, pages 119–126, Seattle, Washington, USA, April. Marie Meteer. 1991. Bridging the generation gap between text planning and linguistic realization. In Computational Intelligence, volume 7 (4). Adwait Ratnaparkhi. 2000. Trainable methods for surface natural language generation. In Proc. of the 1st North American chapter of the Association for Computational Linguistics conference, NAACL 2000, pages 194–201, Stroudsburg, PA, USA. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Nat. Lang. 
Eng., 3(1):57–87, March. Ehud Reiter. 1994. Has a Consensus NL Generation Architecture Appeared, and is it Psycholinguistically Plausible? pages 163–170. Eric K. Ringger, Michael Gamon, Robert C. Moore, David Rojas, Martine Smets, and Simon CorstonOliver. 2004. Linguistically Informed Statistical Models of Constituent Structure for Ordering in Sentence Realization. In Proc. of the 2004 International Conference on Computational Linguistics, Geneva, Switzerland. Jacques Robin. 1993. A revision-based generation architecture for reporting facts in their historical context. In New Concepts in Natural Language Generation: Planning, Realization and Systems. Frances Pinter, London and, pages 238–265. Pinter Publishers. Michael Roth and Anette Frank. 2012. Aligning predicate argument structures in monolingual comparable texts: A new corpus for a new task. In Proc. of the 1st Joint Conference on Lexical and Computational Semantics (*SEM), Montreal, Canada. Robert Rubinoff. 1992. Integrating text planning and linguistic choice by annotating linguistic structures. In Robert Dale, Eduard H. Hovy, Dietmar R¨osner, and Oliviero Stock, editors, NLG, volume 587 of Lecture Notes in Computer Science, pages 45–56. Springer. Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2010. Semeval-2010 task 10: Linking events and their participants in discourse. In Proc. of the 5th International Workshop on Semantic Evaluation, pages 45–50, Uppsala, Sweden, July. Wolfgang Seeker and Jonas Kuhn. 2012. Making Ellipses Explicit in Dependency Conversion for a German Treebank. In Proc. of the 8th conference on International Language Resources and Evaluation, Istanbul, Turkey, May. Advaith Siddharthan and Ann Copestake. 2004. Generating referring expressions in open domains. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 407–414, Barcelona, Spain, July. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. brat: a web-based tool for nlp-assisted text annotation. In Proc. of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102– 107, Avignon, France, April. Leo Wanner, Simon Mille, and Bernd Bohnet. 2012. Towards a surface realization-oriented corpus annotation. In Proc. of the 7th International Natural Language Generation Conference, pages 22–30, Utica, IL, May. Leo Wanner. 1994. Building another bridge over the generation gap. In Proc. of the 7th International Workshop on Natural Language Generation, INLG ’94, pages 137–144, Stroudsburg, PA, USA. Sina Zarrieß, Aoife Cahill, and Jonas Kuhn. 2011. Underspecifying and predicting voice for surface realisation ranking. In Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1007– 1017, Portland, Oregon, USA, June. Sina Zarrieß, Aoife Cahill, and Jonas Kuhn. 2012. To what extent does sentence-internal realisation reflect discourse context? a study on word order. In Proc. of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 767–776, Avignon, France, April. 1557
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1558–1567, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Named Entity Recognition using Cross-lingual Resources: Arabic as an Example Kareem Darwish Qatar Computing Research Institute Doha, Qatar [email protected] Abstract Some languages lack large knowledge bases and good discriminative features for Name Entity Recognition (NER) that can generalize to previously unseen named entities. One such language is Arabic, which: a) lacks a capitalization feature; and b) has relatively small knowledge bases, such as Wikipedia. In this work we address both problems by incorporating cross-lingual features and knowledge bases from English using cross-lingual links. We show that such features have a dramatic positive effect on recall. We show the effectiveness of cross-lingual features and resources on a standard dataset as well as on two new test sets that cover both news and microblogs. On the standard dataset, we achieved a 4.1% relative improvement in Fmeasure over the best reported result in the literature. The features led to improvements of 17.1% and 20.5% on the new news and microblogs test sets respectively. 1 Introduction Named Entity Recognition (NER) is essential for a variety of Natural Language Processing (NLP) applications such as information extraction. There has been a fair amount of work on NER for a variety of languages including Arabic. To train an NER system, some of the following feature types are typically used (Benajiba and Rosso, 2008; Nadeau and Sekine, 2009): - Orthographic features: These features include capitalization, punctuation, existence of digits, etc. One of the most effective orthographic features is capitalization in English, which helps NER to generalize to new text of different genres. However, capitalization is not very useful in some languages such as German, and nonexistent in other languages such as Arabic. Further, even in English social media, capitalization may be inconsistent. - Contextual features: Certain words are indicative of the existence of named entities. For example, the word “said” is often preceded by a named entity of type “person” or “organization”. Sequence labeling algorithms (ex. Conditional Random Fields (CRF)) can often identify such indicative words. - Character-level features: These features typically include the leading and trailing letters of words. In some languages, these letters could prefixes and suffixes. Such features can be indicative or counter-indicative of the existence of named entities. For example, a word ending with “ing” is typically not a named entity, while a word ending in “berg” is often a named entity. - Part-of-speech (POS) tags and morphological features: POS tags indicate (or counter-indicate) the possible presence of a named entity at word level or at word sequence level. Morphological features can mostly indicate the absence of named entities. For example, Arabic allows the attachment of pronouns to nouns and verbs. However, pronouns are rarely ever attached to named entities. - Gazetteers: This feature checks the presence of a word or a sequence of words in large lists of named entities. If gazetteers are small, then they would have low coverage, and if they are very large then their entries may be ambiguous. For example, “syntax” may refer to sentence construction or the music band “Syntax”. Typically, a subset of these features are available for different languages. 
For example, morphological, contextual, and character-level features have been shown to be effective for Arabic NER (Benajiba and Rosso, 2008). However, Arabic lacks indicative orthographic features that generalize to previously unseen named entities. Also, although some 1558 of the Arabic gazetteers that were used for NER were small (Benajiba and Rosso, 2008), there has been efforts to build larger Arabic gazetteers (Attia et al., 2010). Since training and test parts of standard datasets for Arabic NER are drawn from the same genre in relatively close temporal proximity, a named entity recognizer that simply memorizes named entities in the training set generally performs well on such test sets. Thus, the results that are reported in the literature are generally high (AbdulHamid and Darwish, 2010; Benajiba et al., 2008). We illustrate the limited capacity of existing recognizers to generalize to previously unseen named entities using two new test sets that include microblogs as well as news texts that cover local and international politics, economics, health, sports, entertainment, and science. As we will show later, recall is well below 50% for all named entity types on the new test sets. To address this problem, we introduce the use of cross-lingual links between a disadvantaged language, Arabic, and a language with good discriminative features and large resources, English, to improve Arabic NER. We exploit English’s orthographic features, particularly capitalization, as well as Arabic and English Wikipedias, including existing annotations from large knowledge sources such as DBpedia. We also show how to use transliteration mining to improve NER, even when neither language has a capitalization (or similar) feature. The intuition is that if the translation of a word is in fact a transliteration, then the word is likely a named entity. Cross-lingual links are obtained using Wikipedia cross-language links and a large Machine Translation (MT) phrase table that is true cased, where word casing is preserved during training. We show the effectiveness of these new features on a standard dataset as well as two new test sets. The contributions of this paper are as follows: - Using cross-lingual links to exploit orthographic features in other languages. - Employing transliteration mining to improve NER. - Using cross-lingual links to exploit a large knowledge base, namely English DBpedia, to benefit NER. - Introducing two new NER test sets for Arabic that include recent news as well as microblogs. We plan to release these test sets. - Improving over the best reported results in the literature by 4.1% (Abdul-Hamid and Darwish, 2010) by strictly adding cross-lingual features. We also show improvements of 17.1% and 20.5% on the new test sets. The remainder of the paper is organized as follows: Section 2 provides related work; Section 3 describes the baseline system; Section 4 introduces the cross-lingual features and reports on their effectiveness; and Section 5 concludes the paper. 2 Related Work 2.1 Using cross-lingual Features For many NLP tasks, some languages may have significantly more training data, better knowledge resources, or more discriminating features than other languages. If cross-lingual resources are available, such as parallel data, increased training data, better resources, or superior features can be used to improve the processing (ex. tagging) for other languages (Ganchev et al., 2009; Shi et al., 2010; Yarowsky and Ngai, 2001). 
Some work has attempted to use bilingual features in NER. Burkett et al. (2010) used bilingual text to improve monolingual models including NER models for German, which lacks a good capitalization feature. They did so by training a bilingual model and then generating more training data from unlabeled parallel data. They showed significant improvement in German NER effectiveness, particularly for recall. In our work, there is no need for tagged text that has a parallel equivalent in another language. Benajiba et al. (2008) used an Arabic English dictionary from MADA, an Arabic analyzer, to indicate if a word is capitalized in English or not. They reported that it was the second most discriminating feature that they used. However, there seems to be room for improvement because: (1) MADA’s dictionary is relatively small and would have low coverage; and (2) the use of such a binary feature is problematic, because Arabic names are often common Arabic words and hence a word may be translated as a named entity and as a common word. To overcome these two problems, we use cross-lingual features to improve NER using large bilingual resources, and we incorporate confidences to avoid having a binary feature. Richman and Schone (2008) used English linguis1559 tic tools and cross language links in Wikipedia to automatically annotate text in different languages. Transliteration Mining (TM) has been used to enrich MT phrase tables or to improve cross language search (Udupa et al., 2009). Conversely, people have used NER to determine if a word is to be transliterated or not (Hermjakob et al., 2008). However, we are not aware of any prior work on using TM to determine if a sequence is a NE. Further, we are not aware of prior work on using TM (or transliteration in general) as a cross lingual feature in any annotation task. In our work, we use state-of-the-art TM as described by El-Kahki et al. (2011) 2.2 Arabic NER Much work has been done on NER with multiple public evaluation forums. Nadeau and Sekine (Nadeau and Sekine, 2009) surveyed lots of work on NER for a variety of languages. Significant work has been conducted by Benajiba and colleagues on Arabic NER (Benajiba and Rosso, 2008; Benajiba et al., 2008; Benajiba and Rosso, 2007; Benajiba et al., 2007). Benajiba et al. (2007) used a maximum entropy classifier trained on a feature set that includes the use of gazetteers and a stopword list, appearance of a NE in the training set, leading and trailing word bigrams, and the tag of the previous word. They reported 80%, 37%, and 47% F-measure for locations, organizations, and persons respectively on the ANERCORP dataset that they created and publicly released. Benajiba and Rosso (2007) improved their system by incorporating POS tags to improve NE boundary detection. They reported 87%, 46%, and 52% F-measure for locations, organizations, and persons respectively. Benajiba and Rosso (2008) used CRF sequence labeling and incorporated many language specific features, namely POS tagging, base-phrase chunking, Arabic tokenization, and adjectives indicating nationality. They reported that tokenization generally improved recall. Using POS tagging generally improved recall at the expense of precision, leading to overall improvements in F-measure. Using all their suggested features, they reported 90%, 66%, and 73% F-measure for location, organization, and persons respectively. In Benajiba et al. 
(2008), they examined the same feature set on the Automatic Content Extraction (ACE) datasets using CRF sequence labeling and a Support Vector Machine (SVM) classifier. They did not report per category F-measure, but they reported overall 81%, 75%, and 78% macro-average F-measure for broadcast news and newswire on the ACE 2003, 2004, and 2005 datasets respectively. Huang (2005) used an HMMbased NE recognizer for Arabic and reported 77% F-measure on the ACE 2003 dataset. Farber et al. (2008) used POS tags obtained from an Arabic tagger to enhance NER. They reported 70% Fmeasure on the ACE 2005 dataset. Shaalan and Raza (2007) reported on a rule-based system that uses hand crafted grammars and regular expressions in conjunction with gazetteers. They reported upwards of 93% F-measure, but they conducted their experiments on non-standard datasets, making comparison difficult. Abdul-Hamid and Darwish (2010) used a simplified feature set that relied primarily on character level features, namely leading and trailing letters in a word. They also experimented with a variety of phrase level features with little success. They reported an F-measure of 76% and 81% for the ACE2005 and the ANERCorp datasets datasets respectively. We used their simplified features in our baseline system. The different experiments reported in the literature may not have been done on the same training/test splits. Thus, the results may not be completely comparable. Mohit et al. (2012) performed NER on a different genre from news, namely Arabic Wikipedia articles, and reported recall values as low as 35.6%. They used self training and recall oriented classification to improve recall, typically at the expense of precision. McNamee and Mayfield (2002) and Mayfield et al. (2003) used thousands of language independent features such as character n-grams, capitalization, word length, and position in a sentence, along with language dependent features such as POS tags and BP chunking. The use of CRF sequence labeling for NER has shown success (McCallum and Li, 2003; Nadeau and Sekine, 2009; Benajiba and Rosso, 2008). 3 Baseline Arabic NER System For the baseline system, we used the CRF++1 implementation of CRF sequence labeling with default parameters. We opted to reimplement the most suc1http://code.google.com/p/crfpp/ 1560 cessful features that were reported by Benajiba et al. (2008) and Abdul-Hamid and Darwish (2010), namely the leading and trailing 1, 2, 3, and 4 letters in a word; whether a word appears in the gazetteer that was created by Benajiba et al. (2008), which is publicly available, but is rather small (less than 5,000 entries); and the stemmed form of words (after removing coordinating conjunctions, prepositions, and determiners using a rule-based stemmer akin to (Larkey et al., 2002)). As mentioned earlier, the leading and trailing letters in a word may indicate or counter-indicate the presence of named entities. Stemming is important due to the morphological complexity of Arabic. We used the previous and the next words in their raw and stemmed forms as features. For training and testing, we used the ANERCORP dataset (Benajiba and Rosso, 2007). The dataset has approximately 150k tokens and we used the 80/20 training/test splits of Abdul-Hamid and Darwish (2010), who graciously provided us with their splits of the collection and they achieved the best reported results on the dataset. We will refer to their results, which are provided in Table 1, as “baseline-lit”. Table 2 (a) shows our results on the ANERCORP dataset. 
Our results were slightly lower than their results (Abdul-Hamid and Darwish, 2010). It is noteworthy that 69% of the named entities in the test part were seen during training. We also created two new test sets. The first test set is composed of news snippets from the RSS feed of the Arabic (Egypt) version of news.google.com from Oct. 6, 2012. The RSS feed contains the headline and the first 50-100 words in the news articles. The set has news from over a dozen different news sources and covers international and local news, politics, financial news, health, sports, entertainment, and technology. This set contains roughly 15k tokens. The second set contains a set of 1,423 tweets that were randomly selected from tweets authored between November 23, 2011 and November 27, 2011. We scraped tweets from Twitter using the query “lang:ar” (language=Arabic). This set contains approximately 26k tokens. The test sets will be henceforth be referred to as the NEWS and TWEETS sets respectively. They were annotated by one person, a native Arabic speaker, using the Linguistics Data Consortium tagging guidelines. Table 2 (b) and (c) report on the results for the baseline system on both test sets. The results on the NEWS test are substantially lower than those for ANERCORP. It is worth noting that only 27% of the named entities in the NEWS test set were observed in the training set (compared to 69% for ANERCORP). As Table 3 shows for the ANERCORP dataset, using only the tokens as features, where the labeler mainly memorizes previously seen named entities, yields higher results than the baseline results for the NEWS dataset (Table 2 (b)). The results on the TWEETS test are very poor, with 24% of the named entities in the test set appearing in the training set. ANERCORP Dataset Precision Recall Fβ=1 LOC 93 83 88 ORG 84 64 73 PERS 90 75 82 Overall 89 74 81 Table 1: “Baseline-lit” Results from (Abdul-Hamid and Darwish, 2010) (a) ANERCORP Dataset Precision Recall Fβ=1 LOC 93.6 83.3 88.1 ORG 83.8 61.2 70.8 PERS 84.3 64.4 73.0 Overall 88.9 72.5 79.9 (b) NEWS Test Set Precision Recall Fβ=1 LOC 84.1 53.2 65.1 ORG 73.2 23.2 35.2 PERS 74.8 47.1 57.8 Overall 78.0 41.9 54.6 (c) TWEETS Test Set Precision Recall Fβ=1 LOC 79.9 27.1 40.4 ORG 44.4 9.1 15.1 PERS 45.7 27.8 34.5 Overall 58.0 23.1 33.1 Table 2: Baseline Results for the three test sets ANERCORP Dataset Precision Recall Fβ=1 LOC 95.3 62.7 75.6 ORG 86.3 44.7 58.9 PERS 85.4 36.4 51.0 Overall 91.0 50.0 64.5 Table 3: Results of using only tokens as features on ANERCORP 1561 4 Cross-lingual Features We experimented with three different cross-lingual features that used Arabic and English Wikipedia cross-language links and a true-cased phrase table that was generated using Moses (Koehn et al., 2007). True-casing preserves case information during training. We used the Arabic Wikipedia snapshot from September 28, 2012. The snapshot has 348,873 titles including redirects, which are alternative names to articles. Of these articles, 254,145 have cross-lingual links to English Wikipedia. We used DBpedia 3.8 which includes 6,157,591 entries of Wikipedia titles and their “types”, such as “person”, “plant”, or “device”, where a title can have multiple types. The phrase table was trained on a set of 3.69 million parallel sentences containing 123.4 million English tokens. The sentences were drawn from the UN parallel data along with a variety of parallel news data from LDC and the GALE project. 
The Arabic side was stemmed (by removing just prefixes) using the Stanford word segmenter (Green and DeNero, 2012). 4.1 Cross-lingual Capitalization As we mentioned earlier, Arabic lacks capitalization and Arabic names are often common Arabic words. For example, the Arabic name “Hasan” means good. To capture cross-lingual capitalization, we used the aforementioned true-cased phrase table at word and phrase levels as follows: Input: True-cased phrase table PT, sentence S containing n words w0..n, max sequence length l, translations T1..k..m of wi..j for i = 0 →n do j = min(i + l −1, n) if PT contains wi..j & ∃Tk isCaps then weight(wi..j) = P Tk isCaps P (Tk) P Tk isCaps P (Tk)+ P Tk notCaps P (Tk) round weight(wi..j) to first significant figure set tag of wi = B-CAPS-weight set tag for words wi+1..j = I-CAPS-weight else if j > i then j- else tag of wi = null end if end if end for Where: PT was the aforementioned phrase table; l = 4; P(Tk) equaled to the product of p(source|target) and p(target|source) for a word sequence; isCaps and notCaps were whether the translation was capitalized or not respectively; and the weights were binned because CRF++ only takes nominal features. In essence we tried every subsequence of S of length l or less to see if the translation was capitalized. A subsequence can be 1 word long. We tried longer sequences first. To determine if the corresponding phrase was capitalized (isCaps), all non-function words on the English side needed to be capitalized. As an example, the phrase ø XAêË@ ¡J jÖÏ@ (meaning ”Pacific Ocean”) was translated to a capitalized phrase 36.7% of the time. Thus, the word ¡J jÖÏ@ was assigned B-CAPS-0.4 and ø XAêË@ was assigned I-CAPS-0.4. Using weights avoids using capitalization as a binary feature. Table 4 reports on the results of the baseline system with the capitalization feature on the three datasets. In comparing baseline results in Table 2 and cross-lingual capitalization results in Table 4, recall consistently increased for all datasets, particularly for “persons” and “locations”. For the different test sets, recall increased by 3.1 to 6.1 points (absolute) or by 8.4% to 13.6% (relative). This led to an overall improvement in F-measure of 1.8 to 3.4 points (absolute) or 4.2% to 5.7% (relative). Precision dropped overall on the ANERCORP dataset and dropped substantially for the NEWS and TWEETS test sets. (a) ANERCORP Dataset Precision Recall Fβ=1 LOC 92.0/-1.6/-1.7 86.8/3.5/4.2 89.3/1.2/1.4 ORG 82.8/-1.1/-1.3 63.9/2.7/4.4 72.1/1.4/1.9 PERS 86.0/1.7/2.0 75.4/11.0/17.1 80.3/7.3/10.1 Overall 88.4/-0.4/-0.5 78.6/6.1/8.4 83.2/3.4/4.2 (b) NEWS Test Set Precision Recall Fβ=1 LOC 82.1/-2.0/-2.4 59.0/5.8/11.0 68.7/3.5/5.4 ORG 68.4/-4.9/-6.6 23.2/0.0/0.0 34.6/-0.6/-1.7 PERS 70.7/-4.0/-5.4 55.6/8.4/17.9 62.2/4.4/7.6 Overall 74.5/-3.5/-4.5 47.0/5.1/12.2 57.7/3.1/5.7 (c) TWEETS Test Set Precision Recall Fβ=1 LOC 76.9/-3.0/-3.7 27.9/0.9/3.2 41.0/0.5/1.4 ORG 44.4/0.0/0.0 10.4/1.3/14.3 16.8/1.8/11.6 PERS 40.0/-5.7/-12.5 35.0/7.3/26.2 37.3/2.8/8.1 Overall 51.8/-6.2/-10.7 26.3/3.1/13.6 34.9/1.8/5.4 Table 4: Results with cross-lingual capitalization with /absolute/relative differences compared to baseline 1562 4.2 Transliteration Mining An alternative to capitalization can be transliteration mining. The intuition is that named entities are often transliterated, particularly the names of locations and persons. 
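To make the procedure above concrete, a runnable sketch follows. It assumes the true-cased phrase table is exposed as a dictionary from a source phrase to (translation, probability, is-capitalized) entries, an interface chosen purely for illustration.

```python
from typing import Dict, List, Tuple

# phrase -> list of (translation, probability, is_capitalized); an assumed interface
PhraseTable = Dict[str, List[Tuple[str, float, bool]]]


def caps_tags(words: List[str], table: PhraseTable, max_len: int = 4) -> List[str]:
    """Tag each word with B-CAPS-w / I-CAPS-w, where w is the binned share of
    translation probability mass assigned to capitalized English translations,
    or with 'null' if no phrase-table match is found (sketch of Section 4.1)."""
    tags = ["null"] * len(words)
    i = 0
    while i < len(words):
        matched = False
        for j in range(min(i + max_len, len(words)), i, -1):  # longest match first
            entries = table.get(" ".join(words[i:j]))
            if entries and any(is_caps for _, _, is_caps in entries):
                caps = sum(p for _, p, is_caps in entries if is_caps)
                total = sum(p for _, p, _ in entries)
                weight = round(caps / total, 1)  # bin (approximates rounding to the
                                                 # first significant figure)
                tags[i] = f"B-CAPS-{weight}"
                for k in range(i + 1, j):
                    tags[k] = f"I-CAPS-{weight}"
                i = j
                matched = True
                break
        if not matched:
            i += 1
    return tags
```

For the example above, the phrase meaning "Pacific Ocean" would receive weight 0.4, reproducing the B-CAPS-0.4 and I-CAPS-0.4 tags.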
This feature is helpful if crosslingual resources do not have capitalization information, or if the “helper” language to be consulted does not have a useful capitalization feature. We performed transliteration mining (aka cognate matching) at word level for each Arabic word against all its possible translations in the phrase table. We used a transliteration miner akin to that of El-Kahki et al. (2011) that was trained using 3,452 parallel Arabic-English transliteration pairs. We aligned the word-pairs at character level using GIZA++ and the phrase extractor and scorer from the Moses machine translation package (Koehn et al., 2007). The alignment produced mappings between English letters sequences and Arabic letter sequences with associated mapping probabilities. Given an Arabic word, we produced all its possible segmentations along with their associated mappings into English letters. We retained valid target sequences that produced translations in the phrase table. Again we used a weight similar to the one for cross-lingual capitalization and we rounded the values of the ratio the significant figure. The weights were computed as: P Tk isT ransliteration P(Tk) P Tk isT ransliteration P(Tk) + P Tk notT ransliteration P(Tk) (1) where P(Tk) is probability of the kth translation of a word in the phrase table. If a word was not found in the phrase table, the feature value was assigned null. For example, if the translations of the word á‚k are “Hasan”, “Hassan”, and “good”, where the first two are transliterations and the last not, then the weight of the word would be: P(Hasan| á‚k) + P(Hassan| á‚k) P(Hasan| á‚k) + P(Hassan| á‚k) + P(good| á‚k) (2) In our experiments, the weight of á‚k was equal to 0.5 (after rounding). Table 5 reports on the results using the baseline system with the transliteration mining feature. Like the capitalization feature, transliteration mining slightly lowered precision – except for the TWEETS test set where the drop in precision was significant – and positively increased recall, leading to an overall improvement in F-measure for all test sets. Overall, F-measure improved by 1.9%, 3.7%, and 3.9% (relative) compared to the baseline for the ANERCORP, NEWS, and TWEETS test sets respectively. The similarity of results between using transliteration mining and word-level cross-lingual capitalization suggests that perhaps they can serve as surrogates. 4.3 Using DBpedia DBpedia2 is a large collaboratively-built knowledge base in which structured information is extracted from Wikipedia (Bizer et al., 2009). DBpedia 3.8, the release we used in this paper, contains 6,157,591 Wikipedia titles belonging to 296 types. Types vary in granularity with each Wikipedia title having one or more type. For example, NASA is assigned the following types: Agent, Organization, and GovernmentAgency. In all, DBpedia includes the names of 764k persons, 573k locations, and 192k organizations. Of the Arabic Wikipedia titles, 254,145 have Wikipedia cross-lingual links to English Wikipedia, and of those English Wikipedia titles, 185,531 have entries in DBpedia. Since Wikipedia titles may have multiple DBpedia types, we opted to keep the most popular type (by count of how many Wikipedia titles are assigned a particular type) for each title, and we disregarded the rest. We also chose not to use the “Agent” and “Work” types because they were highly ambiguous. We found word sequences in the manner described in the pseudocode for crosslingual capitalization. 
For translation, we generated two features using two translation resources, namely the aforementioned phrase table and ArabicEnglish Wikipedia cross-lingual links. When using the phrase table, we used the most likely translation into English that matches an entry in DBpedia provided that the product of p(source|target) and p(target|source) of translation was above 10−5. We chose the threshold qualitatively using offline experiments. When using Arabic-English Wikipedia cross-lingual links, if an entry was found in the Arabic Wikipedia, we performed a look up in DB2http://dbpedia.org 1563 pedia using the English Wikipedia title that corresponds to the Arabic Wikipedia title. We used Arabic Wikipedia page-redirects to improve coverage. For both features (using the two translation methods), for an Arabic word sequence corresponding to the DBpedia entry, the first word in the sequence was assigned the feature “B-” plus the DBpedia type and subsequent words were assigned the feature “I-” plus the DBpedia type. For example, for éÊË@ H. Qk (meaning “Hezbollah”), the words H. Qk and éÊË@ were assigned “B-Organization” and “IOrganization” respectively. For all other words, the feature was assigned “null”. Using the phrase table for translation likely yielded improved coverage over using Wikipedia cross-lingual links. However, Wikipedia cross-lingual links likely led to higher quality translations, because they were manually curated. Table 6 reports on the results of using the baseline system with the two DBpedia features. Using DBpedia consistently improved precision and recall for named entity types on all test sets, except for a small drop in precision for locations on the ANERCORP dataset and for locations and persons on the TWEETS test set. For the different test sets, improvements in recall ranged between 4.4 and 7.5 points (absolute) or 6.5% and 19.1% (relative). Precision improved by 0.9 and 5.5 points (absolute) or 1.0% and 7.1% (relative) for the ANERCORP and NEWS test sets respectively. Overall improvement in F-measure ranged between 3.2 and 7.6 points (absolute) or 4.1% and 13.9% (relative). 4.4 Putting it All Together Table 7 reports on the results of using all aforementioned cross-lingual features together. Figures 1, 2, and 3 compare the results of the different setups. As the results show, the impact of cross-lingual features on recall were much more pronounced on the NEWS and TWEETS test sets – compared to the ANERCORP dataset. Further, the recall values for the ANERCORP dataset in the baseline experiments were much higher than those for the two other test sets. This confirms our suspicion that the reported values in the literature on the standard datasets are unrealistically high due to the similarity between the training and test sets. Hence, these high effectiveness results may not generalize to other test sets. 
Of all the cross(a) ANERCORP Dataset Precision Recall Fβ=1 LOC 92.9/-0.7/-0.7 83.5/0.2/0.3 88.0/-0.2/-0.2 ORG 82.9/-0.9/-1.0 61.8/0.6/1.0 70.9/0.1/0.1 PERS 84.5/0.3/0.3 71.9/7.5/11.7 77.7/4.7/6.5 Overall 88.3/-0.5/-0.6 75.5/2.9/4.1 81.4/1.5/1.9 (b) NEWS Test Set Precision Recall Fβ=1 LOC 84.9/0.7/0.9 53.6/0.5/0.9 65.7/0.6/0.9 ORG 67.2/-6.1/-8.3 22.9/-0.3/-1.1 34.2/-1.0/-2.9 PERS 72.8/-1.9/-2.6 55.0/7.8/16.7 62.7/4.8/8.4 Overall 75.9/-2.1/-2.6 45.0/3.1/7.4 56.6/2.0/3.7 (c) TWEETS Test Set Precision Recall Fβ=1 LOC 79.1/-0.8/-1.0 27.1/0.0/0.0 40.3/-0.1/-0.3 ORG 41.8/-2.7/-6.0 9.1/0.0/0.0 14.9/-0.2/-1.1 PERS 40.0/-5.7/-12.5 35.5/7.7/27.8 37.6/3.1/8.8 Overall 51.7/-6.3/-10.9 25.8/2.6/11.3 34.4/1.3/3.9 Table 5: Results with transliteration mining with /absolute/relative differences compared to baseline (a) ANERCORP Dataset Precision Recall Fβ=1 LOC 92.7/-0.9/-0.9 87.1/3.9/4.6 89.9/1.7/1.9 ORG 84.6/0.8/0.9 66.6/5.3/8.7 74.5/3.7/5.3 PERS 87.8/3.6/4.2 69.9/5.5/8.6 77.8/4.8/6.6 Overall 89.8/0.9/1.0 77.2/4.7/6.5 83.0/3.2/4.0 (b) NEWS Test Set Precision Recall Fβ=1 LOC 87.8/3.6/4.3 61.8/8.6/16.2 72.5/7.4/11.3 ORG 76.1/2.9/3.9 30.2/7.0/30.1 43.2/8.0/22.7 PERS 83.2/8.5/11.3 54.2/7.1/15.0 65.7/7.8/13.6 Overall 83.5/5.5/7.1 49.5/7.5/18.0 62.2/7.6/13.9 (c) TWEETS Test Set Precision Recall Fβ=1 LOC 77.4/-2.5/-3.1 30.5/3.5/12.9 43.8/3.4/8.4 ORG 57.0/12.5/28.2 15.9/6.8/75.1 24.8/9.8/64.9 PERS 40.8/-4.9/-10.6 31.7/4.0/14.3 35.7/1.2/3.4 Overall 55.3/-2.6/-4.5 27.5/4.4/19.1 36.8/3.7/11.2 Table 6: Results using DBpedia with /absolute/relative differences compared to baseline lingual features that we experimented with, the use of DBpedia led to improvements in both precision and recall (except for precision on the TWEETS test set). Other cross-lingual features yielded overall improvements in F-measure, mostly due to gains in recall, typically at the expense of precision. Overall, F-measure improved by 5.5%, 17.1%, and 20.5% (relative) compared to the baseline for the ANERCORP, NEWS, and TWEETS test sets respectively. For the ANERCORP test set, our results improved over the baseline-lit results (Abdul-Hamid and Darwish, 2010) by 4.1% (relative). 1564 Figure 1: ANERCORP Dataset Results Figure 2: NEWS Test Set Results When using all the features together, one notable result is that precision dropped significantly for the TWEETS test sets. We examined the output for the TWEETS test set and here are some of the factors that affected precision: - the presence of words that would typically be named entities in news but would generally be regular words in tweets. For example, the Arabic word “Mubarak” is most likely the name of the former Egyptian president in the context of news, but it would most likely mean “blessed”, which is common in expressions of congratulations, in tweets. - the use of dialectic words that may have transliterations or a named entity as the most likely translation into English. For example, the word ú æ… is typically the dialectic version of the Arabic word Zú æ…, meaning something. However, since the MT system that we used was trained on modern standard Arabic, the dialectic word would not appear in training and would typically be translated/transliterated to the name “Che” (as in Che Guevara). - Since tweets are restricted in length, authors frequently use shortened versions of named entities. For example, tweets would mostly have “Morsi” instead of “Mohamed Morsi” and without trigger words such as “Dr.” or “president”. 
The full version of a name and trigger words are more comFigure 3: TWEETS Test Set Results (a) ANERCORP Dataset Precision Recall Fβ=1 LOC 92.3/-1.3/-1.4 87.8/4.6/5.5 90.0/1.9/2.1 ORG 81.4/-2.4/-2.9 66.0/4.7/7.7 72.9/2.1/3.0 PERS 87.0/2.8/3.3 77.7/13.3/20.7 82.1/9.1/12.5 Overall 88.7/-0.2/-0.2 80.3/7.8/10.7 84.3/4.4/5.5 (b) NEWS Test Set Precision Recall Fβ=1 LOC 85.1/1.0/1.2 64.1/11.0/20.6 73.1/8.0/12.3 ORG 73.8/0.5/0.7 29.4/6.2/26.9 42.1/6.8/19.4 PERS 76.8/2.0/2.7 63.4/16.3/34.5 69.5/11.7/20.2 Overall 79.2/1.2/1.6 53.6/11.6/27.7 63.9/9.4/17.1 (c) TWEETS Test Set Precision Recall Fβ=1 LOC 81.4/1.5/1.8 33.5/6.5/23.9 47.5/7.1/17.4 ORG 52.1/7.6/17.2 16.2/7.1/78.6 24.7/9.6/64.1 PERS 40.5/-5.2/-11.4 39.2/11.5/41.3 39.8/5.3/15.4 Overall 54.4/-3.6/-6.2 31.4/8.3/35.9 39.9/6.8/20.5 Table 7: Results using all the cross-lingual features with /absolute/relative differences compared to baseline mon in news. This same problem was present in the NEWS test set, because it was constructed from an RSS feed, and headlines, which are typically compact, had a higher representation in the test collection. We observed the same phenomenon for organization names. For example, “the Real” refers to “Real Madrid”. Nicknames are also prevalent. For example, “the Land of the two Sanctuaries” refers to “Saudi Arabia”. We believe that this problem can be overcome by introducing new training data that include tweets (or other social text) and performing domain adaptation. New training data would help: identify words and expressions that are common in conversations, account for common dialectic words, and learn a better word transition model. Further, gazetteers that cover shortened versions of names could be helpful as well. 1565 5 Conclusion In this paper, we presented different cross-lingual features that can make use of linguistic properties and knowledge bases of other languages for NER. For translation, we used an MT phrase table and Wikipedia cross-lingual links. We used English as the “helper” language and we exploited the English capitalization feature and an English knowledge base, DBpedia. If the helper language did not have capitalization, then transliteration mining could provide some of the benefit of capitalization. Transliteration mining requires limited amounts of training examples. We believe that the proposed cross-lingual features can be used to help NER for other languages, particularly languages that lack good features that generalize well. For Arabic NER, the new features yielded an improvement of 5.5% over a strong baseline system on a standard dataset, with 10.7% gain in recall and negligible change in precision. We tested on a new news test set, NEWS, which has recent news articles (the same genre as the standard dataset), and indeed NER effectiveness was much lower. For the new NEWS test set, cross-lingual features led to a small increase in precision (1.6%) and a very large improvement in recall (27.7%). This led to a 17.1% improvement in overall F-measure. We also tested NER on the TWEETS test set, where we observed substantial improvements in recall (35.9%). However, precision dropped by 6.2% for the reasons we mentioned earlier. For future work, it would be interesting to apply cross-lingual features to other language pairs and to make use of joint cross-lingual models. Further, we also plan to investigate Arabic NER on social media, particularly microblogs. References A. Abdul-Hamid and K. Darwish. 2010. Simplified Feature Set for Arabic Named Entity Recognition. 
Proceedings of the 2010 Named Entities Workshop, ACL 2010, pages 110115. Mohammed Attia, Antonio Toral, Lamia Tounsi, Monica Monachini, and Josef van Genabith. 2010. An automatically built named entity lexicon for Arabic. In: LREC 2010 - 7th conference on International Language Resources and Evaluation, 17-23 May 2010, Valletta, Malta. Y. Benajiba, M. Diab, and P. Rosso. 2008. Arabic Named Entity Recognition using Optimized Feature Sets. Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 284293, Honolulu, October 2008. Y. Benajiba and P. Rosso. 2008. Arabic Named Entity Recognition using Conditional Random Fields. In Proc. of Workshop on HLT & NLP within the Arabic World, LREC08. Y. Benajiba, P. Rosso and J. M. Benedi. 2007. ANERsys: An Arabic Named Entity Recognition system based on Maximum Entropy. In Proc. of CICLing2007, Springer-Verlag, LNCS(4394), pp.143-153 Y. Benajiba and P. Rosso. 2007. ANERsys 2.0: Conquering the NER task for the Arabic language by combining the Maximum Entropy with POS-tag information. In Proc. of Workshop on Natural LanguageIndependent Engineering, IICAI-2007. Christian Bizer, Jens Lehmann, Georgi Kobilarov, Sren Auer, Christian Becker, Richard Cyganiak, Sebastian Hellmann. 2009. DBpedia A Crystallization Point for the Web of Data. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, Issue 7, Pages 154165, 2009. D. Burkett, S. Petrov, J. Blitzer, D. Klein. 2010. Learning Better Monolingual Models with Unannotated Bilingual Text. Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 46–54. A. El Kahki, K. Darwish, A. Saad El Din, M. Abd ElWahab and A. Hefny. 2011. Improved Transliteration Mining Using Graph Reinforcement. EMNLP-2011. B. Farber, D. Freitag, N. Habash, and O. Rambow. 2008. Improving NER in Arabic Using a Morphological Tagger. In Proc. of LREC08. K. Ganchev, J. Gillenwater, and B. Taskar. 2009. Dependency grammar induction via bitext projection constraints. In ACL-2009. Spence Green and John DeNero. 2012. A Class-Based Agreement Model for Generating Accurately Inflected Translations. In ACL-2012. Ulf Hermjakob, Kevin Knight, and Hal Daum III. 2008. Name translation in statistical machine translation: Learning when to transliterate. ACL-08: HLT, Pages 389-397. F. Huang. 2005. Multilingual Named Entity Extraction and Translation from Text and Speech. Ph.D. Thesis. Pittsburgh: Carnegie Mellon University. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, Evan Herbst, Moses: Open Source Toolkit 1566 for Statistical Machine Translation, Annual Meeting of the Association for Computational Linguistics (ACL), demonstration session, Prague, Czech Republic, June 2007. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data, In Proc. of ICML, pp.282-289, 2001. Leah S. Larkey, Lisa Ballesteros, and Margaret E. Connell. 2002. Improving stemming for Arabic information retrieval: light stemming and co-occurrence analysis. SIGIR-2002. J. Mayfield, P. McNamee, and C. Piatko. 2003.Named Entity Recognition using Hundreds of Thousands of Features. HLT-NAACL 2003-Volume 4, 2003. A. McCallum and W. Li. 2003. Early Results for Named Entity Recognition with Conditional Random Fields, Features Induction and Web-Enhanced Lexicons. 
In Proc. Conference on Computational Natural Language Learning. P. McNamee and J. Mayfield. 2002. Entity extraction without language-specific. Proceedings of CoNLL, .2002 Behrang Mohit, Nathan Schneider, Rishav Bhowmick, Kemal Oflazer, Noah A. Smith. 2012. Recall-oriented learning of named entities in Arabic Wikipedia. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012), pp. 162-173. 2012. D. Nadeau and S. Sekine. 2009. A Survey of Named Entity Recognition and Classification. Named Entities: Recognition, Classification and Use, ed. S. Sekine and E. Ranchhod, John Benjamins Publishing Company. Alexander E. Richman and Patrick Schone. 2008. Mining wiki resources for multilingual named entity recognition. Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. 2008. K. Shaalan and H. Raza. 2007. Person Name Entity Recognition for Arabic. Proceedings of the 5th Workshop on Important Unresolved Matters, pages 1724, Prague, Czech Republic, June 2007. L. Shi, R. Mihalcea, M. Tian. 2010. Cross Language Text Classification by Model Translation and Semisupervised Learning. Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP 2010. Raghavendra Udupa, Anton Bakalov, and Abhijit Bhole. 2009. They Are Out There, If You Know Where to Look: Mining Transliterations of OOV Query Terms for Cross-Language Information Retrieval. Advances in Information Retrieval. Pages: 437-448. D. Yarowsky and G. Ngai. 2001. Inducing Multilingual POS Taggers and NP Bracketers via Robust Projection across Aligned Corpora. In NAACL-2001. 1567
2013
153
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1568–1576, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Beam Search for Solving Substitution Ciphers Malte Nuhn and Julian Schamper and Hermann Ney Human Language Technology and Pattern Recognition Computer Science Department, RWTH Aachen University, Aachen, Germany <surname>@cs.rwth-aachen.de Abstract In this paper we address the problem of solving substitution ciphers using a beam search approach. We present a conceptually consistent and easy to implement method that improves the current state of the art for decipherment of substitution ciphers and is able to use high order n-gram language models. We show experiments with 1:1 substitution ciphers in which the guaranteed optimal solution for 3-gram language models has 38.6% decipherment error, while our approach achieves 4.13% decipherment error in a fraction of time by using a 6-gram language model. We also apply our approach to the famous Zodiac-408 cipher and obtain slightly better (and near to optimal) results than previously published. Unlike the previous state-of-the-art approach that uses additional word lists to evaluate possible decipherments, our approach only uses a letterbased 6-gram language model. Furthermore we use our algorithm to solve large vocabulary substitution ciphers and improve the best published decipherment error rate based on the Gigaword corpus of 7.8% to 6.0% error rate. 1 Introduction State-of-the-art statistical machine translation (SMT) systems use large amounts of parallel data to estimate translation models. However, parallel corpora are expensive and not available for every domain. Recently different works have been published that train translation models using only nonparallel data. Although first practical applications of these approaches have been shown, the overall decipherment accuracy of the proposed algorithms is still low. Improving the core decipherment algorithms is an important step for making decipherment techniques useful for practical applications. In this paper we present an effective beam search algorithm which provides high decipherment accuracies while having low computational requirements. The proposed approach allows using high order n-gram language models, is scalable to large vocabulary sizes and can be adjusted to account for a given amount of computational resources. We show significant improvements in decipherment accuracy in a variety of experiments while being computationally more effective than previous published works. 2 Related Work The experiments proposed in this paper touch many of previously published works in the decipherment field. Regarding the decipherment of 1:1 substitution ciphers various works have been published: Most older papers do not use a statistical approach and instead define some heuristic measures for scoring candidate decipherments. Approaches like (Hart, 1994) and (Olson, 2007) use a dictionary to check if a decipherment is useful. (Clark, 1998) defines other suitability measures based on n-gram counts and presents a variety of optimization techniques like simulated annealing, genetic algorithms and tabu search. 
On the other hand, statistical approaches for 1:1 substitution ciphers were published in the natural language processing community: (Ravi and Knight, 2008) solve 1:1 substitution ciphers optimally by formulating the decipherment problem as an integer linear program (ILP) while (Corlett and Penn, 2010) solve the problem using A* search. We use our own implementation of these methods to report optimal solutions to 1:1 substitution ciphers for language model orders n = 2 and n = 3. (Ravi and Knight, 2011a) report the first automatic decipherment of the Zodiac-408 cipher. They use a combination of a 3-gram language model and a word dictionary. We run our beam search approach on the same cipher and report better results without using an additional word dictionary—just by using a high order n-gram language model. (Ravi and Knight, 2011b) report experiments on large vocabulary substitution ciphers based on the Transtac corpus. (Dou and Knight, 2012) improve upon these results and provide state-of-the-art results on a large vocabulary word substitution cipher based on the Gigaword corpus. We run our method on the same corpus and report improvements over the state of the art. (Ravi and Knight, 2011b) and (Nuhn et al., 2012) have shown that—even for larger vocabulary sizes—it is possible to learn a full translation model from non-parallel data. Even though this work is currently only able to deal with substitution ciphers, phenomena like reordering, insertions and deletions can in principle be included in our approach.
3 Definitions
In the following we will use the machine translation notation and denote the ciphertext with f1^N = f1 ... fj ... fN, which consists of cipher tokens fj ∈ Vf. We denote the plaintext with e1^N = e1 ... ei ... eN (and its vocabulary Ve respectively). We define
e0 = f0 = e(N+1) = f(N+1) = $   (1)
with "$" being a special sentence boundary token. We use the abbreviations V̄e = Ve ∪ {$} and V̄f respectively. A general substitution cipher uses a table s(e|f) which contains for each cipher token f a probability that the token f is substituted with the plaintext token e. Such a table for substituting cipher tokens {A, B, C, D} with plaintext tokens {a, b, c, d} could for example look like
     a    b    c    d
A   0.1  0.2  0.3  0.4
B   0.4  0.2  0.1  0.3
C   0.4  0.1  0.2  0.3
D   0.3  0.4  0.2  0.1
The 1:1 substitution cipher encrypts a given plaintext into a ciphertext by replacing each plaintext token with a unique substitute: This means that the table s(e|f) contains all zeroes, except for one "1.0" per f ∈ Vf and one "1.0" per e ∈ Ve. For example the text abadcab would be enciphered to BCBADBC when using the substitution
     a  b  c  d
A    0  0  0  1
B    1  0  0  0
C    0  1  0  0
D    0  0  1  0
In contrast to the 1:1 substitution cipher, the homophonic substitution cipher allows multiple cipher tokens per plaintext token, which means that the table s(e|f) is all zero, except for one "1.0" per f ∈ Vf. For example the above plaintext could be enciphered to ABCDECF when using the homophonic substitution
     a  b  c  d
A    1  0  0  0
B    0  1  0  0
C    1  0  0  0
D    0  0  0  1
E    0  0  1  0
F    0  1  0  0
We will use the definition
nmax = max_e Σ_f s(e|f)   (2)
to characterize the maximum number of different cipher symbols allowed per plaintext symbol. We formalize the 1:1 substitutions with a bijective function φ : Vf → Ve and homophonic substitutions with a general function φ : Vf → Ve. Following (Corlett and Penn, 2010), we call cipher functions φ, for which not all φ(f)'s are fixed, partial cipher functions.
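As a small, purely illustrative companion to these definitions (not taken from the paper), the two example tables above can be written as plain mappings:

```python
# Minimal illustration of the cipher definitions above. A deterministic substitution
# is just a mapping from plaintext tokens to cipher tokens; names here are illustrative.
def encipher(plaintext, substitution):
    return "".join(substitution[e] for e in plaintext)

# 1:1 substitution: every plaintext letter has exactly one unique substitute.
one_to_one = {"a": "B", "b": "C", "c": "D", "d": "A"}
assert encipher("abadcab", one_to_one) == "BCBADBC"

# A homophonic cipher allows several cipher symbols per plaintext letter (nmax > 1);
# deciphering it means recovering the many-to-one function phi: Vf -> Ve.
homophonic_decipher = {"A": "a", "C": "a", "B": "b", "F": "b", "E": "c", "D": "d"}
assert "".join(homophonic_decipher[f] for f in "ABCDECF") == "abadcab"
```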
Further, φ′ is said to extend φ, if for all f that are fixed in φ, it holds that f is also fixed in φ′ with φ′(f) = φ(f). The cardinality of φ counts the number of fixed f's in φ. When talking about partial cipher functions we use the notation for relations, in which φ ⊆ Vf × Ve. For example, with
φ = {(A, a)} and φ′ = {(A, a), (B, b)}
it follows that φ ⊆ φ′ (a shorthand notation for "φ′ extends φ") and
|φ| = 1, |φ′| = 2, φ(A) = a, φ′(A) = a, φ(B) = undefined, φ′(B) = b.
The general decipherment goal is to obtain a mapping φ such that the probability of the deciphered text is maximal:
φ̂ = argmax_φ p(φ(f1) φ(f2) φ(f3) ... φ(fN))   (3)
Here p(...) denotes the language model. Depending on the structure of the language model, Equation 3 can be further simplified.
4 Beam Search
In this Section we present our beam search approach to solving Equation 3. We first present the general algorithm, containing many higher level functions. We then discuss possible instances of these higher level functions.
4.1 General Algorithm
Figure 1 shows the general structure of the beam search algorithm for the decipherment of substitution ciphers. The general idea is to keep track of all partial hypotheses in two arrays Hs and Ht. During search all possible extensions of the partial hypotheses in Hs are generated and scored. Here, the function EXT ORDER chooses which cipher symbol is used next for extension, EXT LIMITS decides which extensions are allowed, and SCORE scores the new partial hypotheses. PRUNE then selects a subset of these hypotheses which are stored to Ht. Afterwards the array Ht is copied to Hs and the search process continues with the updated array Hs. Due to the structure of the algorithm the cardinality of all hypotheses in Hs increases in each step. Thus only hypotheses of the same cardinality are compared in the pruning step. When Hs contains full cipher relations, the cipher relation with the maximal score is returned (n-best output can be implemented by returning the n best scoring hypotheses in the final array Hs).
1: function BEAM SEARCH(EXT ORDER, EXT LIMITS, PRUNE)
2:   init sets Hs, Ht
3:   CARDINALITY = 0
4:   Hs.ADD((∅, 0))
5:   while CARDINALITY < |Vf| do
6:     f = EXT ORDER[CARDINALITY]
7:     for all φ ∈ Hs do
8:       for all e ∈ Ve do
9:         φ′ := φ ∪ {(e, f)}
10:        if EXT LIMITS(φ′) then
11:          Ht.ADD(φ′, SCORE(φ′))
12:        end if
13:      end for
14:    end for
15:    PRUNE(Ht)
16:    CARDINALITY = CARDINALITY + 1
17:    Hs = Ht
18:    Ht.CLEAR()
19:  end while
20:  return best scoring cipher function in Hs
21: end function
Figure 1: The general structure of the beam search algorithm for decipherment of substitution ciphers. The high level functions SCORE, EXT ORDER, EXT LIMITS and PRUNE are described in Section 4.
Figure 2 illustrates how the algorithm explores the search space for a homophonic substitution cipher. In the following we show several instances of EXT ORDER, EXT LIMITS, SCORE, and PRUNE.
4.2 Extension Limits (EXT LIMITS)
In addition to the implicit constraint of φ being a function Vf → Ve, one might be interested in functions of a specific form: For 1:1 substitution ciphers (EXT LIMITS SIMPLE) φ must fulfill that the number of cipher letters f ∈ Vf that map to any e ∈ Ve is at most one. Since partial hypotheses violating this condition can never "recover" when being extended, it becomes clear that these partial hypotheses can be left out from search.
Figure 2: Illustration of the search space explored by the beam search algorithm with cipher vocabulary Vf = {A, B, C, D}, plaintext vocabulary Ve = {a, b, c, d}, EXT ORDER = (B, C, A, D), homophonic extension limits (EXT LIMITS HOMOPHONIC) with nmax = 4, and histogram pruning with nkeep = 4. Hypotheses are visualized as nodes in the tree. The x-axis represents the extension order. At each level only those 4 hypotheses that survived the histogram pruning process are extended.
Homophonic substitution ciphers can be handled by the beam search algorithm, too. Here the condition that φ must fulfill is that the number of cipher letters f ∈ Vf that map to any e ∈ Ve is at most nmax (which we will call EXT LIMITS HOMOPHONIC). As soon as this condition is violated, all further extensions will also violate the condition. Thus, these partial hypotheses can be left out.
4.3 Score Estimation (SCORE)
The score estimation function needs to predict how good or bad a partial hypothesis (cipher function) might become. We propose simple heuristics that use the n-gram counts rather than the original ciphertext. The following formulas consider the 2-gram case. Equations for higher n-gram orders can be obtained analogously. With Equation 3 in mind, we want to estimate the best possible score
∏_{j=1..N+1} p(φ′(fj) | φ′(fj−1))   (4)
which can be obtained by extensions φ′ ⊇ φ. By defining the counts
Nff′ = Σ_{i=1..N+1} δ(f, fi−1) δ(f′, fi)   (5)
where δ denotes the Kronecker delta, we can equivalently use the scores
Σ_{f,f′ ∈ V̄f} Nff′ log p(φ′(f′) | φ′(f))   (6)
Using this formulation it is easy to propose a whole class of heuristics: We only present the simplest heuristic, which we call TRIVIAL HEURISTIC. Its name stems from the fact that it only evaluates those parts of a given φ′ that are already fixed, and thus does not estimate any future costs. Its score is calculated as
Σ_{f,f′ ∈ φ′} Nff′ log p(φ′(f′) | φ′(f)).   (7)
Here f, f′ ∈ φ′ denotes that f and f′ need to be covered in φ′. This heuristic is optimistic since we implicitly use "0" as estimate for the non fixed parts of the sum, for which Nff′ log p(·|·) ≤ 0 holds. It should be noted that this heuristic can be implemented very efficiently. Given a partial hypothesis φ with given SCORE(φ), the score of an extension φ′ can be calculated as
SCORE(φ′) = SCORE(φ) + NEWLY FIXED(φ, φ′)   (8)
where NEWLY FIXED only includes scores for n-grams that have been newly fixed in φ′ during the extension step from φ to φ′.
4.4 Extension Order (EXT ORDER)
For the choice which ciphertext symbol should be fixed next during search, several possibilities exist: The overall goal is to choose an extension order that leads to an overall low error rate. Intuitively it seems a good idea to first try to decipher the more frequent symbols rather than the least frequent ones. It is also clear that the choice of a good extension order is dependent on the score estimation function SCORE: The extension order should lead to informative scores early on so that misleading hypotheses can be pruned out early. In most of our experiments we will make use of a very simple extension order: HIGHEST UNIGRAM FREQUENCY simply fixes the most frequent symbols first. In case of the Zodiac-408, we use another strategy that we call HIGHEST NGRAM COUNT extension order. In each step it greedily chooses the symbol that will maximize the number of fixed ciphertext n-grams.
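For concreteness, here is a minimal sketch of the TRIVIAL HEURISTIC of Eq. (7) and its incremental update of Eq. (8) for the 2-gram case; `bigram_counts` and `lm` are illustrative stand-ins, not the paper's implementation.

```python
import math

def trivial_heuristic(phi, bigram_counts, lm):
    """Eq. (7): sum count * log p only over bigrams whose both cipher symbols are fixed in phi."""
    score = 0.0
    for (f, f2), count in bigram_counts.items():
        if f in phi and f2 in phi:
            score += count * math.log(lm(phi[f2], phi[f]))
    return score  # non-fixed bigrams contribute 0, an optimistic estimate

def extended_score(old_score, phi, f_new, e_new, bigram_counts, lm):
    """Eq. (8): add only the bigrams that become fully fixed when f_new is mapped to e_new."""
    phi_new = dict(phi)
    phi_new[f_new] = e_new
    newly_fixed = 0.0
    for (f, f2), count in bigram_counts.items():
        if (f == f_new or f2 == f_new) and f in phi_new and f2 in phi_new:
            newly_fixed += count * math.log(lm(phi_new[f2], phi_new[f]))
    return old_score + newly_fixed

# Toy usage with placeholder counts and a placeholder plaintext language model p(e2|e1).
counts = {("A", "B"): 3, ("B", "A"): 1, ("A", "C"): 2}
lm = lambda e2, e1: 0.25
base = trivial_heuristic({"A": "t", "B": "h"}, counts, lm)
best = extended_score(base, {"A": "t", "B": "h"}, "C", "e", counts, lm)
```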
This strategy is useful because the SCORE function we use is TRIVIAL HEURISTIC, which is not able to provide informative scores if only few full n-grams are fixed. 4.5 Pruning (PRUNE) We propose two pruning methods: HISTOGRAM PRUNING sorts all hypotheses according to their score and then keeps only the best nkeep hypotheses. THRESHOLD PRUNING keeps only those hypotheses φkeep for which SCORE(φkeep) ≥SCORE(φbest) −β (9) holds for a given parameter β ≥0. Even though THRESHOLD PRUNING has the advantage of not needing to sort all hypotheses, it has proven difficult to choose proper values for β. Due to this, all experiments presented in this paper only use HISTOGRAM PRUNING. 5 Iterative Beam Search (Ravi and Knight, 2011b) propose a so called “iterative EM algorithm”. The basic idea is to run a decipherment algorithm—in their case an EM algorithm based approach—on a subset of the vocabulary. After having obtained the results from the restricted vocabulary run, these results are used to initialize a decipherment run with a larger vocabulary. The results from this run will then be used for a further decipherment run with an even larger vocabulary and so on. In our large vocabulary word substitution cipher experiments we iteratively increase the vocabulary from the 1000 most frequent words, until we finally reach the 50000 most frequent words. 6 Experimental Evaluation We conduct experiments on letter based 1:1 substitution ciphers, the homophonic substitution cipher Zodiac-408, and word based 1:1 substitution ciphers. For a given reference mapping φref, we evaluate candidate mappings φ using two error measures: Mapping Error Rate MER(φ, φref) and Symbol Error Rate SER(φ, φref). Roughly speaking, SER reports the fraction of symbols in the deciphered text that are not correct, while MER reports the fraction of incorrect mappings in φ. Given a set of symbols Veval with unigram counts N(v) for v ∈Veval, and the total amount of running symbols Neval = P v∈Veval N(v) we define MER = 1 − X v∈Veval 1 |Veval| · δ(φ(v), φref(v)) (10) SER = 1 − X v∈Veval N(v) Neval · δ(φ(v), φref(v)) (11) Thus the SER can be seen as a weighted form of the MER, emphasizing errors for frequent words. In decipherment experiments, SER will often be lower than MER, since it is often easier to decipher frequent words. 6.1 Letter Substitution Ciphers As ciphertext we use the text of the English Wikipedia article about History4, remove all pictures, tables, and captions, convert all letters to lowercase, and then remove all non-letter and nonspace symbols. This corpus forms the basis for shorter cryptograms of size 2, 4, 8, 16, 32, 64, 128, and 256—of which we generate 50 each. We make sure that these shorter cryptograms do not end or start in the middle of a word. We create the ciphertext using a 1:1 substitution cipher in which we fix the mapping of the space symbol ’ ’. 
This 4http://en.wikipedia.org/wiki/History 1572 Order Beam MER [%] SER [%] RT [s] 3 10 33.15 25.27 0.01 3 100 12.00 6.95 0.06 3 1k 7.37 3.06 0.53 3 10k 5.10 1.42 5.33 3 100k 4.93 1.31 47.70 3 ∞∗ 4.93 1.31 19 700.00 4 10 55.97 48.19 0.02 4 100 18.15 14.41 0.10 4 1k 5.13 3.42 0.89 4 10k 1.55 1.00 8.57 4 100k 0.39 0.06 81.34 5 10 69.19 60.13 0.02 5 100 35.57 29.02 0.14 5 1k 10.89 8.47 1.29 5 10k 0.38 0.06 11.91 5 100k 0.38 0.06 120.38 6 10 74.65 64.77 0.03 6 100 40.26 33.38 0.17 6 1k 13.53 10.08 1.58 6 10k 2.45 1.28 15.77 6 100k 0.09 0.05 151.85 Table 1: Symbol error rates (SER), Mapping error rates (MER) and runtimes (RT) in dependence of language model order (ORDER) and histogram pruning size (BEAM) for decipherment of letter substitution ciphers of length 128. Runtimes are reported on a single core machine. Results for beam size “∞” were obtained using A∗search. makes our experiments comparable to those conducted in (Ravi and Knight, 2008). Note that fixing the ’ ’ symbol makes the problem much easier: The exact methods show much higher computational demands for lengths beyond 256 letters when not fixing the space symbol. The plaintext language model we use a letter based (Ve = {a, . . . , z, }) language model trained on a subset of the Gigaword corpus (Graff et al., 2007). We use extension limits fitting the 1:1 substitution cipher nmax = 1 and histogram pruning with different beam sizes. For comparison we reimplemented the ILP approach from (Ravi and Knight, 2008) as well as the A∗approach from (Corlett and Penn, 2010). Figure 3 shows the results of our algorithm for different cipher length. We use a beam size of 100k for the 4, 5 and 6-gram case. Most remarkably our 6-gram beam search results are significantly better than all methods presented in the literature. For the cipher length of 32 we obtain a symbol error rate of just 4.1% where the optimal solution (i.e. without search errors) for a 3-gram 2 4 8 16 32 64 128 256 0 10 20 30 40 50 60 70 80 90 100 Cipher Length Symbol Error Rate (%) Exact 2gram Exact 3gram Beam 3gram Beam 4gram Beam 5gram Beam 6gram Figure 3: Symbol error rates for decipherment of letter substitution ciphers of different lengths. Error bars show the 95% confidence interval based on decipherment on 50 different ciphers. Beam search was performed with a beam size of “100k”. language model has a symbol error rate as high as 38.3%. Table 1 shows error rates and runtimes of our algorithm for different beam sizes and language model orders given a fixed ciphertext length of 128 letters. It can be seen that achieving close to optimal results is possible in a fraction of the CPU time needed for the optimal solution: In the 3gram case the optimal solution is found in 1 400th of the time needed using A∗search. It can also be seen that increasing the language model order does not increase the runtime much while providing better results if the beam size is large enough: If the beam size is not large enough, the decipherment accuracy decreases when increasing the language model order: This is because the higher order heuristics do not give reliable scores if only few n-grams are fixed. To summarize: The beam search method is significantly faster and obtains significantly better results than previously published methods. Furthermore it offers a good trade-off between CPU time and decipherment accuracy. 
1573 i l i k e k i l l i n g p e o p l e b e c a u s e i t i s s o m u c h f u n i t i n m o r e f u n t h a n k i l l i n g w i l d g a m e i n t h e f o r r e s t b e c a u s e m a n i s t h e m o a t r a n g e r o u e a n a m a l o f a l l t o k i l l s o m e t h i n g g i Figure 4: First 136 letters of the Zodiac-408 cipher and its decipherment. 6.2 Zodiac-408 Cipher As ciphertext we use a transcription of the Zodiac-408 cipher. It consists of 54 different symbols and has a length of 408 symbols.5 The cipher has been deciphered by hand before. It contains some mistakes and ambiguities: For example, it contains misspelled words like forrest (vs. forest), experence (vs. experience), or paradice (vs. paradise). Furthermore, the last 17 letters of the cipher do not form understandable English when applying the same homophonic substitution that deciphers the rest of the cipher. This makes the Zodiac-408 a good candidate for testing the robustness of a decipherment algorithm. We assume a homophonic substitution cipher, even though the cipher is not strictly homophonic: It contains three cipher symbols that correspond to two or more plaintext symbols. We ignore this fact for our experiments, and count—in case of the MER only—the decipherment for these symbols as correct when the obtained mapping is contained in the set of reference symbols. We use extension limits with nmax = 8 and histogram pruning with beam sizes of 10k up to 10M. The plaintext language model is based on the same subset of Gigaword (Graff et al., 2007) data as the experiments for the letter substitution ciphers. However, we first removed all space sym5hence its name Order Beam MER [%] SER [%] RT [s] 4 10k 71.43 67.16 222 4 100k 66.07 61.52 1 460 4 1M 39.29 34.80 12 701 4 10M 19.64 16.18 125 056 5 10k 94.64 96.57 257 5 100k 10.71 5.39 1 706 5 1M 8.93 3.19 14 724 5 10M 8.93 3.19 152 764 6 10k 87.50 84.80 262 6 100k 94.64 94.61 1 992 6 1M 8.93 2.70 17 701 6 10M 7.14 1.96 167 181 Table 2: Symbol error rates (SER), Mapping error rates (MER) and runtimes (RT) in dependence of language model order (ORDER) and histogram pruning size (BEAM) for the decipherment of the Zodiac-408 cipher. Runtimes are reported on a 128-core machine. bols from the training corpus before training the actual letter based 4-gram, 5-gram, and 6-gram language model on it. Other than (Ravi and Knight, 2011a) we do not use any word lists and by that avoid any degrees of freedom in how to integrate it into the search process: Only an n-gram language model is used. Figure 4 shows the first parts of the cipher and our best decipherment. Table 2 shows the results of our algorithm on the Zodiac-408 cipher for different language model orders and pruning settings. To summarize: Our final decipherment—for which we only use a 6-gram language model—has a symbol error rate of only 2.0%, which is slightly better than the best decipherment reported in (Ravi and Knight, 2011a). They used an n-gram language model together with a word dictionary and obtained a symbol error rate of 2.2%. We thus obtain better results with less modeling. 6.3 Word Substitution Ciphers As ciphertext, we use parts of the JRC corpus (Steinberger et al., 2006) and the Gigaword corpus (Graff et al., 2007). While the full JRC corpus contains roughly 180k word types and consists of approximately 70M running words, the full Gigaword corpus contains around 2M word types and roughly 1.5G running words. 
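For the word substitution ciphers, vocabularies of this size are handled with the iterative scheme of Section 5, growing the vocabulary from the 1000 most frequent words up to 50000. The following is only a rough sketch: `decipher` stands in for the beam search of Figure 1, and the exact vocabulary restriction and initialization details are assumptions rather than the paper's implementation.

```python
def iterative_decipher(ciphertext, words_by_frequency, decipher,
                       schedule=(1000, 10000, 20000, 50000)):
    """Re-run the decipherer with an ever larger top-N vocabulary, seeding each run
    with the mapping found on the previous, smaller vocabulary."""
    phi = {}
    for top_n in schedule:
        vocab = set(words_by_frequency[:top_n])     # restriction to the top-N words (assumption)
        phi = decipher(ciphertext, vocab, init_mapping=phi)
    return phi
```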
We run experiments for three different setups: The “JRC” and “Gigaword” setups use the first half of the respective corpus as ciphertext, while the plaintext language model of order n = 3 was 1574 Setup Top MER [%] SER [%] RT [hh:mm] Gigaword 1k 81.91 27.38 03h 10m Gigaword 10k 30.29 8.55 09h 21m Gigaword 20k 21.78 6.51 16h 25m Gigaword 50k 19.40 5.96 49h 02m JRC 1k 73.28 15.42 00h 32m JRC 10k 15.82 2.61 13h 03m JRC-Shuf 1k 76.83 19.04 00h 31m JRC-Shuf 10k 15.08 2.58 13h 03m Table 3: Word error rates (WER), Mapping error rates (MER) and runtimes (RT) for iterative decipherment run on the (TOP) most frequent words. Error rates are evaluated on the full vocabulary. Runtimes are reported on a 128-core machine. trained on the second half. The “JRC-Shuf” setup is created by randomly selecting half of the sentences of the JRC corpus as ciphertext, while the language model was trained on the complementary half of the corpus. We encrypt the ciphertext using a 1:1 substitution cipher on word level, imposing a much larger vocabulary size. We use histogram pruning with a beam size of 128 and use extension limits of nmax = 1. Different to the previous experiments, we use iterative beam search with iterations as shown in Table 3. The results for the Gigaword task are directly comparable to the word substitution experiments presented in (Dou and Knight, 2012). Their final decipherment has a symbol error rate of 7.8%. Our algorithm obtains 6.0% symbol error rate. It should be noted that the improvements of 1.8% symbol error rate correspond to a larger improvement in terms of mapping error rate. This can also be seen when looking at Table 3: An improvement of the symbol error rate from 6.51% to 5.96% corresponds to an improvement of mapping error rate from 21.78% to 19.40%. To summarize: Using our beam search algorithm in an iterative fashion, we are able to improve the state-of-the-art decipherment accuracy for word substitution ciphers. 7 Conclusion We have presented a simple and effective beam search approach to the decipherment problem. We have shown in a variety of experiments—letter substitution ciphers, the Zodiac-408, and word substitution ciphers—that our approach outperforms the current state of the art while being conceptually simpler and keeping computational demands low. We want to note that the presented algorithm is not restricted to 1:1 and homophonic substitution ciphers: It is possible to extend the algorithm to solve n:m mappings. Along with more sophisticated pruning strategies, score estimation functions, and extension orders, this will be left for future research. Acknowledgements This work was partly realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation. Experiments were performed with computing resources granted by JARA-HPC from RWTH Aachen University under project “jara0040”. References Andrew J. Clark. 1998. Optimisation heuristics for cryptology. Ph.D. thesis, Faculty of Information Technology, Queensland University of Technology. Eric Corlett and Gerald Penn. 2010. An exact A* method for deciphering letter-substitution ciphers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1040–1047, Uppsala, Sweden, July. The Association for Computer Linguistics. Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. 
In Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 266–275, Jeju Island, Korea, July. Association for Computational Linguistics. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2007. English Gigaword Third Edition. Linguistic Data Consortium, Philadelphia. George W. Hart. 1994. To decode short cryptograms. Communications of the Association for Computing Machinery (CACM), 37(9):102–108, September. Malte Nuhn, Arne Mauser, and Hermann Ney. 2012. Deciphering foreign language by combining language models and context vectors. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL), pages 156–164, Jeju, Republic of Korea, July. Association for Computational Linguistics. Edwin Olson. 2007. Robust dictionary attack of short simple substitution ciphers. Cryptologia, 31(4):332–342, October. 1575 Sujith Ravi and Kevin Knight. 2008. Attacking decipherment problems optimally with low-order ngram models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 812–819, Honolulu, Hawaii. Association for Computational Linguistics. Sujith Ravi and Kevin Knight. 2011a. Bayesian inference for Zodiac and other homophonic ciphers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 239–247, Portland, Oregon, June. Association for Computational Linguistics. Sujith Ravi and Kevin Knight. 2011b. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACLHLT), pages 12–21, Portland, Oregon, USA, June. Association for Computational Linguistics. Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaˇz Erjavec, and Dan Tufis¸. 2006. The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages. In In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), pages 2142–2147. European Language Resources Association. 1576
2013
154
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1577–1586, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Social Text Normalization using Contextual Graph Random Walks Hany Hassan Microsoft Research Redmond, WA [email protected] Arul Menezes Microsoft Research Redmond, WA [email protected] Abstract We introduce a social media text normalization system that can be deployed as a preprocessing step for Machine Translation and various NLP applications to handle social media text. The proposed system is based on unsupervised learning of the normalization equivalences from unlabeled text. The proposed approach uses Random Walks on a contextual similarity bipartite graph constructed from n-gram sequences on large unlabeled text corpus. We show that the proposed approach has a very high precision of (92.43) and a reasonable recall of (56.4). When used as a preprocessing step for a state-of-the-art machine translation system, the translation quality on social media text improved by 6%. The proposed approach is domain and language independent and can be deployed as a preprocessing step for any NLP application to handle social media text. 1 Introduction Social Media text is usually very noisy and contains a lot of typos, ad-hoc abbreviations, phonetic substitutions, customized abbreviations and slang language. The social media text is evolving with new entities, words and expressions. Natural language processing and understanding systems such as Machine Translation, Information Extraction and Text-to-Speech are usually trained and optimized for clean data; therefore such systems would face a challenging problem with social media text. Various social media genres developed distinct characteristics. For example, SMS developed a nature of shortening messages to avoid multiple keystrokes. On the other hand, Facebook and instant messaging developed another genre where more emotional expressions and different abbreviations are very common. Somewhere in between, Twitter’s statuses come with some brevity similar to SMS along with the social aspect of Facebook. On the same time, various social media genres share many characteristics and typo styles. For example, repeating letters or punctuation for emphasizing and emotional expression such as ”‘goooood morniiing”’. Using phonetic spelling in a generalized way or to reflect a local accent; such as ”‘wuz up bro”’ (what is up brother). Eliminating vowels such as ”‘cm to c my luv”’. Substituting numbers for letters such as ”‘4get”’ (forget) , ”‘2morrow”’ (tomorrow), and ”‘b4”’ (before). Substituting phonetically similar letters such as ”‘phone”’ (fon). Slang abbreviations which usually abbreviates multi-word expression such as ”‘LMS”’ (like my status) , ”‘idk”’ (i do not know), ”‘rofl”’ (rolling on floor laughing). While social media genres share many characteristics, they have significant differences as well. It is crucial to have a solution for text normalization that can adapt to such variations automatically. We propose a text normalization approach using an unsupervised method to induce normalization equivalences from noisy data which can adapt to any genre of social media. In this paper, we focus on providing a solution for social media text normalization as a preprocessing step for NLP applications. However, this is a challenging problem for several reasons. First, it is not straightforward to define the Out-ofVocabulary (OOV) words. 
Traditionally, an OOV word is defined as a word that does not exist in the vocabulary of a given system. However, this definition is not adequate for the social media text which has a very dynamic nature. Many words and named entities that do not exist in a given vocabulary should not be considered for normalization. Second, same OOV word may have many 1577 appropriate normalization depending on the context and on the domain. Third, text normalization as a preprocessing step should have very high precision; in other words, it should provide conservative and confident normalization and not overcorrect. Moreover, the text normalization should have high recall, as well, to have a good impact on the NLP applications. In this paper, we introduce a social media text normalization system which addresses the challenges mentioned above. The proposed system is based on constructing a lattice from possible normalization candidates and finding the best normalization sequence according to an n-gram language model using a Viterbi decoder. We propose an unsupervised approach to learn the normalization candidates from unlabeled text data. The proposed approach uses Random Walks on a contextual similarity graph constructed form n-gram sequences on large unlabeled text corpus. The proposed approach is very scalable, accurate and adaptive to any domain and language. We evaluate the approach on the normalization task as well as machine translation task. The rest of this paper is organized as follows: Section(2) discusses the related work, Section(3) introduces the text normalization system and the baseline candidate generators, Section(4) introduces the proposed graph-based lexicon induction approach, Section(5) discusses the experiments and output analysis, and finally Section(6) concludes and discusses future work. 2 Related Work Early work handled the text normalization problem as a noisy channel model where the normalized words go through a noisy channel to produce the noisy text. (Brill and Moore, 2000) introduced an approach for modeling the spelling errors as a noisy channel model based on string to string edits. Using this model gives significant performance improvements compared to previously proposed models. (Toutanova and Moore, 2002) improved the string to string edits model by modeling pronunciation similarities between words. (Choudhury et al., 2007) introduced a supervised HMM channel model for text normalization which has been expanded by (Cook and Stevenson, 2009) to introduce unsupervised noisy channel model using probabilistic models for common abbreviation and various spelling errors types. Some researchers used Statistical Machine Translation approach for text normalization; formalizing the problem as a translation from the noisy forms to the normalized forms. (Aw et al., 2006) proposed an approach for normalizing Short Messaging Service (SMS) texts by translating it into normalized forms using Phrase-based SMT techniques on character level. The main drawback of these approaches is that the noisy channel model cannot accurately represent the errors types without contextual information. More recent approaches tried to handle the text normalization problem using normalization lexicons which map the noisy form of the word to a normalized form. For example, (Han et al., 2011) proposed an approach using a classifier to identify the noisy words candidate for normalization; then using some rules to generate lexical variants and a small normalization lexicon. 
(Gouws et al., 2011) proposed an approach using an impoverished normalization lexicon based on string and distributional similarity along with a dictionary lookup approach to detect noisy words. More recently, (Han et al., 2012) introduced a similar approach by generating a normalization lexicon based on distributional similarity and string similarity. This approach uses pairwise similarity where any two words that share the same context are considered as normalization equivalences. The pairwise approach has a number of limitations. First, it does not take into account the relative frequencies of the normalization equivalences that might share different contexts. Therefore, the selection of the normalization equivalences is performed on pairwise basis only and is not optimized over the whole data. Secondly, the normalization equivalences must appear in the exact same context to be considered as a normalization candidate. These limitations affect the accuracy and the coverage of the produced lexicon. Our approach also adopts a lexicon based approach for text normalization, we construct a lattice from possible normalization candidates and find the best normalization sequence according to an n-gram language model using a Viterbi decoder. The normalization lexicon is acquired from unlabeled data using random walks on a contextual similarity graph constructed form n-gram sequences on large unlabeled text corpus. Our approach has some similarities with (Han et al., 2012) since both approaches utilize a normaliza1578 tion lexicon acquired form unlabeled data using distributional and string similarities. However, our approach is significantly different since we acquire the lexicon using random walks on a contextual similarity graph which has a number of advantages over the pairwise similarity approach used in (Han et al., 2012). Namely, the acquired normalization equivalence are optimized globally over the whole data, the rare equivalences are not considered as good candidates unless there is a strong statistical evidence across the data, and finally the normalization equivalences may not share the same context. Those are clear advantages over the pairwise similarity approach and result in a lexicon with higher accuracy as well as wider coverage. Those advantages will be clearer when we describe the proposed approach in details and during evaluation and comparison to the pairwise approach. 3 Text Normalization System In this paper, we handle text normalization as a lattice scoring approach, where the translation is performed from noisy text as the source side to the normalized text as the target side. Unlike conventional MT systems, the translation table is not learned from parallel aligned data; instead it is modeled by the graph-based approach of lexicon generation as we will describe later. We construct a lattice from possible normalization candidates and find the best normalization sequence according to an n-gram language model using a Viterbi decoder. In this paper, we restrict the normalization lexicon to one-to-one word mappings, we do not consider multi words mapping for the lexicon induction. To identify OOV candidates for normalization; we restrict proposing normalization candidates to the words that we have in our induced normalization lexicon only. This way, the system would provide more confident and conservative normalization. We move the problem of identifying OOV words to training time; at training time we use soft criteria to identify OOV words. 
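A minimal sketch of this lattice scoring step is given below; it is illustrative only: a toy bigram scorer stands in for the paper's 5-gram language model, and `lexicon` and `lm_logprob` are placeholder names rather than the actual system's interfaces.

```python
def viterbi_normalize(tokens, lexicon, lm_logprob):
    """Pick the best candidate sequence; words outside the induced lexicon are kept as-is."""
    beam = {"<s>": (0.0, [])}                           # state = last normalized word
    for tok in tokens:
        candidates = lexicon.get(tok, []) + [tok]       # conservative: the token itself survives
        new_beam = {}
        for cand in candidates:
            new_beam[cand] = max(
                (score + lm_logprob(prev, cand), path + [cand])
                for prev, (score, path) in beam.items()
            )
        beam = new_beam
    return max(beam.values())[1]

lexicon = {"2morrow": ["tomorrow"], "u": ["you"], "c": ["see"]}
vocab = {"see", "you", "tomorrow"}
toy_lm = lambda prev, word: 0.0 if word in vocab else -5.0   # stand-in for a real n-gram LM
print(viterbi_normalize(["c", "u", "2morrow"], lexicon, toy_lm))  # ['see', 'you', 'tomorrow']
```

Keeping the original token as a candidate mirrors the conservative behavior described above: a word is only changed when the induced lexicon proposes a better-scoring alternative.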
3.1 Baseline Normalization Candidates Generation We experimented with two normalization candidate generators as baseline systems. The first is a dictionary based spelling correction similar to Aspell1. In this experiment we used the spell checker 1http://aspell.net/ to generate all possible candidates for OOV words and then applied the Viterbi decoder on the constructed lattice to score the best correction candidates using a language model. Our second candidates generator is based on a trie approximate string matching with K errors similar to the approach proposed in (Chang et al., 2010), where K errors can be caused by substitution, insertion, or deletion operations. In our implementation, we customized the errors operations to accommodate the nature of the social media text. Such as lengthening, letter substitution, letter-number substitution and phonetic substitution. This approach overcomes the main problem of the dictionary-based approach which is providing inappropriate normalization candidates to the errors styles in the social media text. As we will show in the experiments in Section(5), dictionary-based normalization methods proved to be inadequate for social media domain normalization for many reasons. First, they provide generic corrections which are inappropriate for social media text. Second, they usually provide corrections with the minimal edit distance for any word or named entity regardless of the nature of the words. Finally, the previous approaches do not take into account the dynamics of the social media text where new terms can be introduced on a daily basis. 4 Normalization Lexicons using Graph-based Random Walks 4.1 Bipartite Graph Representation The main motivation of this approach is that normalization equivalences share similar context; which we call contextual similarity. For instance, assume 5-gram sequences of words, two words may be normalization equivalences if their n-gram context shares the same two words on the left and the same two words on the right. In other words, they are sharing a wild card pattern such as (word 1 word 2 * word 4 word 5). This contextual similarity can be represented as a bipartite graph with the first partite representing the words and the second partite representing the n-gram contexts that may be shared by words. A word node can be either normalized word or noisy word. Identifying if a word is normalized or noisy (candidate for normalization) is crucial since this decision limits the candidate noisy words to be normalized. We adopted a soft criteria for iden1579 C2 making 4 makin 2 mking 1 tkin 1 C3 2 3 C1 taking 1 takin 2 1 C4 1 4 5 Figure 1: Bipartite Graph Representation, left nodes represent contexts, gray right nodes represent the noisy words and white right nodes represent the normalized words. Edge weight is the co-occurrence count of a word and its context. tifying noisy words. A vocabulary is constructed from a large clean corpus. Any word that does not appear in this vocabulary more than a predefined threshold (i.e. 10 times) is considered as a candidate for normalization (noisy word). Figure(1) shows a sample of the bipartite graph G(W, C, E), where noisy words are shown as gray nodes. 
Algorithm 4.1: CONSTRUCTBIPARTITE(text) comment: Construct Bipartite Graph output (G(W, C, E)) comment: Extract all n-gram sequences Ngrams ←EXTRACTNGRAMS(TextCorpus) for each n ∈Ngrams do                          comment: Check for center word if ISNOISY(CenterWord) W ←ADDSOURCENODE(CenterWord) else W ←ADDABSORBINGNODE(CenterWord) comment: add the context pattern C ←ADD(Context) comment: edge weight E ←ADD(Context, Word, count) The bipartite graph, G(W, C, E), is composed of W which includes all nodes representing normalized words and noisy words, C which includes all nodes representing shared context, and finally E which represents the edges of the graph connecting word nodes and context nodes. The weight on the edge is simply the number of occurrences of a given word in a context. While constructing the graph, we identify if a node represents a noisy word (N) (called source node) or a normalized word (M) (called absorbing node). The bipartite graph is constructed using the procedure in Algorithm(4.1). 4.2 Lexicon generation using Random Walks Our proposed approach uses Markov Random Walks on the bipartite graph in Figure(1) as defined in (Norris, 1997). The main objective is to identify pairs of noisy and normalized words that can be considered as normalization equivalences. In principal, this is similar to using random walks for semi-supervised label propagation which has been introduced in (Szummer and Jaakkola, 2002) and then used in many other applications. For example, (Hughes and Ramage, 2007) used random walks on Wordnet graph to measure lexical semantic relatedness between words. (Das and Petrov, 2011) used graph-based label propagation for cross-lingual knowledge transfers to induce POS tags between two languages. (Minkov and Cohen, 2012) introduced a path constrained graph walk algorithm given a small number of labeled examples to assess nodes relatedness in the graph. In this paper, we apply the label propagation approach to the text normalization problem. Consider a random walk on the bipartite graph G(W, C, E) starting at a noisy word (source node) and ending at a normalized word (absorbing node). The walker starts from any source node Ni belonging to the noisy words then move to any other connected node Mj with probability Pij. The transition between each pair of nodes is defined by a transition probability Pij which represents the normalized probability of the cooccurrence counts of the word and the corresponding context. Though the counts are symmetric, the probability is not symmetric. This is due to the probability normalization which is done according to the nodes connectivity. Therefore, the transition probability between any two nodes i, j is defined as: Pij = Wij/ ∑ ∀k Wik (1) For any non-connected pair of nodes, Pij =0. It is worth noting that due to the bipartite graph representation; any word node, either noisy (source) or normalized (absorbing), is only connected to context nodes and not directly connected to any other word node. 1580 The algorithm repeats independent random walks for K times where the walks traverse the graph randomly according to the transition probability distribution in Eqn(1); each walk starts from the source noisy node and ends at an absorbing normalized node, or consumes the maximum number of steps without hitting an absorbing node. For any random walk the number of steps taken to traverse between any two nodes is called the hitting time (Norris, 1997). 
Therefore, the hitting time between a noisy and a normalized pair of nodes (n, m) with a walk r is hr(n, m). We define the cost between the two nodes as the average hitting time H(n, m) of all walks that connect those two nodes: H(n, m) = ∑ ∀r hr(n, m)/R (2) Consider the bipartite graph in Figure(1), assume a random walk starting at the source node representing the noisy word ”tkin” then moves to the context node C1 then to the absorbing node representing the normalized word ”taking”. This random walk will associate ”tkin” with ”taking” with a walk of two steps (hits). Another random walk that can connect the two words is [”tkin” →C4 →”takin” →C1 →”taking”], which has 4 steps (hits). In this case, the cost of this pair of nodes is the average number of hits connecting them which is 3. It is worth noting that the random walks are selected according to the transition probability in Eqn(1); therefore, the more probable paths will be picked more frequently. The same pair of nodes can be connected with many walks of various steps (hits), and the same noisy word can be connected to many other normalized words. We define the contextual similarity probability of a normalization equivalence pair n, m as L(n, m). Which is the relative frequency of the average hitting of those two nodes, H(n, m), and all other normalized nodes linked to that noisy word. Thus L(n, m), is calculated as: L(n, m) = H(n, m)/ ∑ i H(n, mi) (3) Furthermore, we add another similarity cost between a noisy word and a normalized word based on the lexical similarity cost, SimCost(n, m), which we will describe in the next section. The final cost associated with a pair is: Cost(n, m) = λ1L(n, m) + λ2SimCost(n, m) (4) Algorithm 4.2: INDUCELEXICON(G) output (Lexicon) INIT((Lexicon)) for each n ∈W ∈G(W, C, E) do                                        comment: for noisy nodes only if ISNOISY(n)                  INIT(Rn) comment: do K random walks for i ←0 to K do Rn ←RANDOMWALK(n) comment: Calculate Avg. hits and normalize Ln ←NORMALIZE(Rn) comment: Calculate Lexical Sim Cost Ln ←SIMCOST(Ln) Ln ←PRUNE(Ln) Lexicon ←ADD(Ln) We used uniform interpolation, both λ1 and λ2 equals 1. The final Lexicon is constructed using those entries and if needed we prune the list to take top N according to the cost above. The algorithm is outlined in 4.2. 4.3 Lexical Similarity Cost We use a similarity function proposed in (Contractor et al., 2010) which is based on Longest Common Subsequence Ratio (LCSR) (Melamed, 1999). This cost function is defined as the ratio of LCSR and Edit distance between two strings as follows: SimCost(n, m) = LCSR(n, m)/ED(n, m) (5) LCSR(n, m) = LCS(n, m)/MaxLenght(n, m) (6) We have modified the Edit Distance calculation ED(n,m) to be more adequate for social media text. The edit distance is calculated between the consonant skeleton of the two words; by removing all vowels, we used Editex edit distance as proposed in (Zobel and Philip, 1996), repetition is reduced to a single letter before calculating the edit distance, and numbers in the middle of words are substituted by their equivalent letters. 5 Experiments 5.1 Training and Evaluation Data We collected large amount of social media data to generate the normalization lexicon using the ran1581 dom walk approach. The data consists of 73 million Twitter statuses. All tweets were collected from March/April 2012 using the Twitter Streaming APIs2. 
5 Experiments

5.1 Training and Evaluation Data

We collected a large amount of social media data to generate the normalization lexicon using the random walk approach. The data consists of 73 million Twitter statuses. All tweets were collected from March/April 2012 using the Twitter Streaming APIs2. We augmented this data with 50 million sentences of clean data from the English LDC Gigaword corpus3. We combined both the noisy and the clean data to induce the normalization dictionary, while the Gigaword clean data alone was used to train the language model that scores the normalized lattice.

We constructed a test set of 1000 social media sentences that were corrected by a native human annotator; the main guideline was to normalize noisy words to their corresponding clean words in a consistent way according to the evidence in the context. We will refer to this test set as SM-Test. Furthermore, we developed a test set for evaluating the effect of the normalization system when used as a preprocessing step for machine translation. The machine translation test set is composed of 500 sentences of social media English text translated into normalized Spanish text by a bilingual translator.

2 https://dev.twitter.com/docs/streaming-apis
3 http://www.ldc.upenn.edu/Catalog/LDC2011T07

5.2 Evaluating Normalization Lexicon Generation

We extracted 5-gram sequences from the combined noisy and clean data and then limited the space of noisy 5-gram sequences to those which contain only one noisy word as the center word, with all other words, representing the context, being non-noisy. As mentioned before, we identify whether a word is noisy by looking it up in a vocabulary list constructed from clean data. In these experiments, the vocabulary is constructed from the language model data (50M sentences of the English Gigaword corpus). Any word that appears fewer than 10 times in this vocabulary is considered noisy and is a candidate for normalization during the lexicon induction process. It is worth noting that our notion of a noisy word does not mean it is an OOV that has to be corrected; instead, it indicates that the word is a candidate for correction but may be left unnormalized if there is no confident normalization for it. This helps maintain the approach as a high-precision text normalization system, which is highly preferable for an NLP preprocessing step.

We constructed a lattice using the normalization candidates and scored the best Viterbi path with a 5-gram language model. We experimented with two candidate generators as baseline systems, namely dictionary-based spelling correction and trie approximate match with K errors, where K=3. For both candidate generators, the cost function for a given candidate is calculated using the lexical similarity cost in Eqn(5). We compared those approaches with our newly proposed unsupervised normalization lexicon induction; in this case the cost for a candidate is the combined cost of the contextual similarity probability and the lexical similarity cost as defined in Eqn(4).

We examined the effect of data size and of the number of random walk steps on the accuracy and coverage of the induced dictionary. We constructed the bipartite graph from the n-gram sequences as described in Algorithm 4.1, and then applied the random walk procedure of Algorithm 4.2 with 100 walks. The total number of word nodes is about 7M and the total number of context nodes is about 480M. We used the MapReduce framework to implement the proposed technique in order to handle such a large graph. We experimented with maximum random walk steps of 2, 4 and 6, and with different portions of the data as well. Finally, we pruned the lexicon to keep the top 5 candidates per noisy word.
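The noisy-word test and the per-word pruning described above can be sketched in a few lines of Python. The combined_cost function is assumed to follow Eqn. (4), the threshold of 10 and the top-5 pruning mirror this section's setup, and the assumption that a larger combined value indicates a more likely normalization is ours for illustration.

from collections import Counter

NOISY_THRESHOLD = 10   # words seen fewer than 10 times in clean data are noisy
TOP_N = 5              # keep the top 5 candidates per noisy word

def is_noisy(word, clean_vocab_counts):
    return clean_vocab_counts.get(word, 0) < NOISY_THRESHOLD

def induce_entries(noisy_word, candidates, combined_cost):
    """candidates: normalized words reached by random walks from noisy_word.
    combined_cost(n, m) = lambda1 * L(n, m) + lambda2 * SimCost(n, m), Eqn. (4)."""
    scored = [(m, combined_cost(noisy_word, m)) for m in candidates]
    scored.sort(key=lambda x: x[1], reverse=True)   # assumed: higher = better
    return scored[:TOP_N]

# toy usage with hypothetical counts
clean_counts = Counter({"taking": 1200, "takin": 3})
assert is_noisy("tkin", clean_counts)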
Table 1 shows the lexicons resulting from the different experiments.

Lexicon   Entries   Data   Steps
Lex1      123K      20M    4
Lex2      281K      73M    2
Lex3      327K      73M    4
Lex4      363K      73M    6
Table 1: Generated lexicons; Steps is the maximum number of random walk steps.

As shown in Table 1, we experimented with different data sizes and different numbers of random walk steps. The more data we have, the larger the lexicon we get; larger step limits also increase the induced lexicon size. A random walk step limit of 2 means that the noisy/normalized pair must share the same context, while a step limit of 4 or more means that they may not share the same context. Next, we examine the effect of lexicon size on the normalization task.

5.3 Text Normalization Evaluation

We experimented with different candidate generators and compared them to the unsupervised lexicon approach. Table 2 shows the precision and recall on the SM-Test set.

System  Candidates  Precision  Recall  F-Measure
Base1   Dict        33.9       15.1    20.98
Base2   Trie        26.64      27.65   27.13
RW1     Lex1        88.76      59.23   71.06
RW2     Lex2        90.66      54.06   67.73
RW3     Lex3        92.43      56.4    70.05
RW4     Lex4        90.87      60.73   72.8
Table 2: Text normalization with different lexicons.

In Table 2, the first baseline uses a dictionary-based spell checker, which obtains low precision and very low recall. The trie approximate string match does a similar job with better recall, though its precision is worse. Both baseline approaches are inadequate for social media text, since both will try to correct any word that is similar to a word in the dictionary. The trie approximate match does a better job on recall since the approximate match is based on phonetic and lexical similarities. On the other hand, the induced normalization lexicon approach does much better even with a small amount of data, as we can see with system RW1, which uses Lex1 generated from 20M sentences and has 123K lexicon entries. Increasing the amount of training data has a positive impact on performance, especially on recall. Increasing the number of steps also improves recall, but with a considerable impact on precision. It is clear that increasing the amount of data while keeping the step limit at 4 gives better precision as well as better coverage. This is a preferred setting, since the main objective of this approach is to have high precision so that it can serve as a reliable preprocessing step for machine translation and other NLP applications.

5.4 Comparison with Pairwise Similarity

We present experimental results comparing our proposed approach with (Han et al., 2012), which used pairwise contextual similarity to induce a normalization lexicon of 40K entries; we refer to this lexicon as HB-Dict. We compare the performance of HB-Dict and our induced dictionary (system RW3). We evaluate both systems on the SM-Test test set and on the (Han et al., 2012) test set of 548 sentences, which we call HB-Test.

Test set  System   Precision  Recall  F-Measure
SM-Test   HB-Dict  71.90      26.30   38.51
          RW3      92.43      56.4    70.05
HB-Test   HB-Dict  70.0       17.9    26.3
          RW3      85.37      56.4    69.93
Table 3: Text normalization results.

As shown in Table 3, the RW3 system significantly outperforms the HB-Dict system with the lexicon from (Han et al., 2012) on both test sets, for both precision and recall. The contextual graph random walk approach helps in providing a high-precision lexicon, since the sampling nature of the approach helps in filtering out unreliable normalization equivalences: the random walks traverse the more frequent paths, which leads to more probable normalization equivalences.
On the other hand, the proposed approach also provides high recall, which is hard to achieve together with high precision. Since the proposed approach deploys random walks to sample paths that can traverse many steps, it relaxes the constraint that normalization equivalences have to share the same context. Instead, a noisy word may share a context with another noisy word which in turn shares a context with a clean equivalent normalization word. Therefore, we end up with a lexicon that has much higher recall than the pairwise similarity approach, since it explores equivalences beyond the pairwise relation. Moreover, the random walk sampling emphasizes the more frequent paths and hence provides a high-precision lexicon.

5.5 Output Analysis

Table 4 shows some examples of the induced normalization equivalences. The first part shows good examples where vowels are restored and phonetically similar words are matched. Remarkably, the correction of "viewablity" to "visibility" is interesting since the system picked the more frequent form. Moreover, the lexicon contains some entries with foreign-language words normalized to their English translation. On the other hand, the lexicon also has some bad normalizations, such as "unrecycled", which should be normalized to "non recycled"; since the system is limited to one-word corrections, it did not get it. Another interesting bad normalization is "tutting", which is a new type of dancing and should not be corrected to "tweeting".

Noisy       Clean       Remarks
tnght       tonight     vowels restored
darlin      darling     g restored
urung       orange      phonetic similarity
viewablity  visibility  good correction
unrecycled  recycled    negation ignored
tutting     tweeting    tutting is a dancing type
Table 4: Lexicon samples.

Table 5 lists a number of examples and their normalization using both Baseline1 and RW3. In the first example, RW3 produced the correct normalization "interesting", which apparently is not the candidate with the shortest edit distance, though it is the most frequent candidate in the generated lexicon; the baseline system did not get it right and produced a wrong normalization with a shorter edit distance. Example 2 shows the same effect, with "cuz" normalized to "because". In Example 3, neither the baseline nor RW3 produced the correct normalization of "yur" to "you are"; this is currently a limitation of our system, since we only allow one-to-one word mappings in the generated lexicons, not one-to-many or many-to-many. In Example 4, RW3 did not normalize "dure" to "sure", whereas the baseline normalized it by mistake to "dare". This shows a characteristic of the proposed approach: it is very conservative in proposing normalizations, which is desirable for a preprocessing step for NLP applications, and this limitation can be mitigated by providing more data for generating the lexicon. Finally, Example 4 also shows that the system normalizes "gr8", which is mainly due to having a flexible similarity cost during the normalization lexicon construction.

1. Source: Mad abt dt so mch intesting
   Baseline1: Mad at do so much ingesting
   RW3: Mad about that so much interesting
2. Source: i’l do cuz ma parnts r ma lyf
   Baseline1: I’ll do cut ma parents r ma life
   RW3: I’ll do because my parents are my life
3. Source: yur cuuuuute
   Baseline1: yur cuuuuute
   RW3: your cute
4. Source: I’m dure u will get a gr8 score
   Baseline1: I’m dare you will get a gr8 score
   RW3: I’m dure you will get a great score
Table 5: Normalization examples.

5.6 Machine Translation Task Evaluation

The final evaluation of the text normalization system is an extrinsic one, in which we evaluate the effect of text normalization on translating social media text from English to Spanish using a large-scale translation system trained on general-domain data. The system is trained on English-Spanish parallel data from the WMT 2012 evaluation4. The data consists of about 5M parallel sentences of news, Europarl, and UN data. The system is a state-of-the-art phrase-based system similar to Moses (Hoang et al., 2007). We used the BLEU score (Papineni et al., 2002) to evaluate translation accuracy with and without normalization. Table 6 shows the translation evaluation with the different systems. Translation quality improved by about 6%, from 29.02 to 30.87, when using RW3 as a preprocessing step.

4 http://www.statmt.or/wmt12

System            BLEU   Improvement
No Normalization  29.02  0%
Baseline1         29.13  0.37%
HB-Dict           29.76  3.69%
RW3               30.87  6.37%
Table 6: Translation results.

6 Conclusion and Future Work

We introduced a social media text normalization system that can be deployed as a preprocessor for MT and various NLP applications to handle social media text. The proposed approach is very scalable and adaptive to any domain and language. We showed that the proposed unsupervised approach provides a normalization system with very high precision and reasonable recall. We compared the system with conventional correction approaches and with recent previous work, and we showed that it substantially outperforms other systems. Finally, we used the system as a preprocessing step for a machine translation system, which improved translation quality by 6%. As an extension to this work, we will extend the approach to handle many-to-many normalization pairs; we also plan to apply the approach to more languages. Furthermore, the approach can easily be extended to handle similar problems such as accent restoration and generic entity normalization.

Acknowledgments

We would like to thank Lee Schwartz and Will Lewis for their help in constructing the test sets and in the error analysis. We would also like to thank the anonymous reviewers for their helpful and constructive comments.

References

AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normalization. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 33–40, Sydney, Australia.

Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. In ACL 2000: Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, Englewood Cliffs, NJ, USA.

Ye-In Chang, Jiun-Rung Chen, and Min-Tze Hsu. 2010. A hash trie filter method for approximate string matching in genomic databases. Applied Intelligence, 33(1):21–38, Springer US.

Monojit Choudhury, Rahul Saraf, Vijit Jain, Animesh Mukherjee, Sudeshna Sarkar, and Anupam Basu. 2007. Investigation and modeling of the structure of texting language. International Journal of Document Analysis and Recognition, 10:157–174.

Danish Contractor, Tanveer Faruquie, and Venkata Subramaniam. 2010. Unsupervised cleansing of noisy text.
In COLING ’10: Proceedings of the 23rd International Conference on Computational Linguistics, pages 189–196.

Paul Cook and Suzanne Stevenson. 2009. An unsupervised model for text message normalization. In CALC ’09: Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 71–78, Boulder, USA.

Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 600–609, Portland, Oregon.

Stephan Gouws, Dirk Hovy, and Donald Metzler. 2011. Unsupervised mining of lexical variants from noisy text. In Proceedings of the First Workshop on Unsupervised Learning in NLP, pages 82–90, Edinburgh, Scotland.

Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 368–378, Portland, Oregon, USA.

Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically constructing a normalisation dictionary for microblogs. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), pages 421–432, Jeju Island, Korea.

Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Richard Zens, Alexandra Constantin, Marcello Federico, Nicola Bertoldi, Chris Dyer, Brooke Cowan, Wade Shen, Christine Moran, and Ondrej Bojar. 2007. Moses: Open source toolkit for statistical machine translation.

Thad Hughes and Daniel Ramage. 2007. Lexical semantic relatedness with random graph walks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 581–589, Prague.

Fei Liu, Fuliang Weng, Bingqing Wang, and Yang Liu. 2011. Insertion, deletion, or substitution? Normalizing text messages without precategorization nor supervision. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 19–24, Portland, Oregon.

Dan Melamed. 1999. Bitext maps and alignment via pattern recognition. Computational Linguistics, 25:107–130.

Einat Minkov and William Cohen. 2012. Graph based similarity measures for synonym extraction from parsed text. In Proceedings of the TextGraphs Workshop.

J. Norris. 1997. Markov Chains. Cambridge University Press.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 311–318.

Richard Sproat, Alan W. Black, Stanley Chen, Shankar Kumar, Mari Ostendorf, and Christopher Richards. 2001. Normalization of non-standard words.

Xu Sun, Jianfeng Gao, Daniel Micol, and Chris Quirk. 2010. Learning phrase-based spelling error models from clickthrough data. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 266–274, Sweden.

Martin Szummer and Tommi Jaakkola. 2002. Partially labeled classification with Markov random walks. In Advances in Neural Information Processing Systems, pages 945–952.

Kristina Toutanova and Robert C. Moore. 2002. Pronunciation modeling for improved spelling correction.
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 144–151, Philadelphia, USA.

Justin Zobel and Philip Dart. 1996. Phonetic string matching: Lessons from information retrieval. In Proceedings of the Eighteenth ACM SIGIR International Conference on Research and Development in Information Retrieval, pages 166–173, Zurich, Switzerland.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1587–1596, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Integrating Phrase-based Reordering Features into a Chart-based Decoder for Machine Translation ThuyLinh Nguyen Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Stephan Vogel Qatar Computing Research Institute Tornado Tower Doha, Qatar [email protected] Abstract Hiero translation models have two limitations compared to phrase-based models: 1) Limited hypothesis space; 2) No lexicalized reordering model. We propose an extension of Hiero called PhrasalHiero to address Hiero’s second problem. Phrasal-Hiero still has the same hypothesis space as the original Hiero but incorporates a phrase-based distance cost feature and lexicalized reodering features into the chart decoder. The work consists of two parts: 1) for each Hiero translation derivation, find its corresponding discontinuous phrase-based path. 2) Extend the chart decoder to incorporate features from the phrase-based path. We achieve significant improvement over both Hiero and phrase-based baselines for ArabicEnglish, Chinese-English and GermanEnglish translation. 1 Introduction Phrase-based and tree-based translation model are the two main streams in state-of-the-art machine translation. The tree-based translation model, by using a synchronous context-free grammar formalism, can capture longer reordering between source and target language. Yet, tree-based translation often underperforms phrase-based translation in language pairs with short range reordering such as Arabic-English translation (Zollmann et al., 2008; Birch et al., 2009). We follow Koehn et al. (2003) for our phrasebased system and Chiang (2005) for our Hiero system. In both systems, the translation of a source sentence f is the target sentence e∗that maximizes a linear combination of features and weights: ⟨e∗, a∗⟩= argmax ⟨e,a⟩∈H(f) X m∈M λmhm (e, f, a) . (1) where • a is a translation path of f. In the phrasebased system, aph represents a segmentation of e and f and a correspondance of phrases. In the Hiero system, atr is a derivation of a parallel parse tree of f and e, each nonterminal representing a rule in the derivation. • H (f) is the hypothesis space of the sentence f. We denote Hph (f) as the phrase-based hypothesis space of f and Htr (f) as its treebased hypothesis space. Galley and Manning (2010) point out that due to the hard constraints of rule combination, the tree-based system does not have the same excessive hypothesis space as the phrase-based system. • M is the set of feature indexes used in the decoder. Many features are shared between phrase-based and tree-based systems including language model, word count, and translation model features. Phrase-based systems often use a lexical reordering model in addition to the distance cost feature. The biggest difference in a Hiero system and a phrase-based system is in how the reordering is modeled. In the Hiero system, the reordering decision is encoded in weighted translation rules, determined by nonterminal mappings. For example, the rule X →ne X1 pas ; not X1 : w indicates the translation of the phrase between ne and pas to be after the English word not with score w. During decoding, the system parses the source sentence and synchronously generates the target output. To achieve reordering, the phrase-based system translates source phrases out of order. 
A reordering distance limit is imposed to avoid search space explosion. Most phrase-based systems are equipped with a distance reordering cost feature to tune the system towards the right amount of reordering, but then also a lexicalized reordering 1587 model to model the direction of adjacent source phrases reordering as either monotone, swap or discontinuous. There are two reasons to explain the shortcomings of the current Hiero system: 1. A limited hypothesis space because the synchronous context-free grammar is not applicable to non-projective dependencies. 2. It does not have the expressive lexicalized reordering model and distance cost features of the phrase-based system. When comparing phrase-based and Hiero translation models, most of previous work on treebased translation addresses its limited hypothesis space problem. Huck et al. (2012) add new rules into the Hiero system, Carreras and Collins (2009) apply the tree adjoining grammar formalism to allow highly flexible reordering. On the other hand, the Hiero model has the advantage of capturing long distance and structure reordering. Galley and Manning (2010) extend phrase-based translation by allowing gaps within phrases such as ⟨ne . . . pas, not⟩, so the decoder still has the discriminative reordering features of phrase-based, but also uses on average longer phrases. However, these phrase pairs with gaps do not capture structure reordering as do Hiero rules with nonterminal mappings. For example, the rule X → ne X1 pas ; not X1 explicitly places the translation of the phrase between ne and pas behind the English word not through nonterminal X1. This is important for language pairs with strict reordering. In our Chinese-English experiment, the Hiero system still outperforms the discontinuous phrasebased system. We address the second problem of the original Hiero decoder by mapping Hiero translation derivations to corresponding phrase-based paths, which not only have the same output but also preserve structure distortion of the Hiero translation. We then include phrase-based features into the Hiero decoder. A phrase-based translation path is the sequence of phrase-pairs, whose source sides cover the source sentence and whose target sides generate the target sentence from left to right. If we look at the leaves of a Hiero derivation tree, the lexicals also form a segmentation of the source and target sentence, thus also form a discontinuous phrasebased translation path. As an example, let us look at the translation of the French sentence je ne parle pas le franc¸aise into English i don’t speak french in Figure 1. The Hiero decoder translates the sentence using a derivation of three rules: • r1 = X →parle ; speak. • r2 = X →ne X1 pas ; don′t X1. • r3 = X → Je X1 le Franc¸ais ; I X1 french. From this Hiero derivation, we have a segmentation of the sentence pairs into phrase pairs according to the word alignments, as shown on the left side of Figure 1. Ordering these phrase pairs according the word sequence on the target side, shown on the right side of Figure 1, we have a phrasebased translation path consisting of four phrase pairs: (je, i) , (ne . . . pas, not) , (parle, speak) , (lefrancaise, french) that has the same output as the Hiero system. Note that even though the Hiero decoder uses a composition of three rules, the corresponding phrase-based path consists of four phrase pairs. We name this new variant of the Hiero decoder, which uses phrase-based features, Phrasal-Hiero. 
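To make this derivation-to-path mapping concrete, the following is a minimal Python sketch (not the authors' implementation) that flattens a Hiero derivation into the corresponding discontinuous phrase-based path, given each rule's target-ordered sequence of lexical phrase pairs and nonterminal slots. The data structures and rule encodings are illustrative assumptions.

# Each rule is represented by its target-ordered sequence of elements:
# ('PP', source_phrase, target_phrase) for a lexical phrase pair, or
# ('NT', k) for the k-th nonterminal, to be filled by a sub-derivation.
RULES = {
    'r1': [('PP', 'parle', 'speak')],
    'r2': [('PP', 'ne ... pas', "don't"), ('NT', 1)],
    'r3': [('PP', 'Je', 'I'), ('NT', 1), ('PP', 'le français', 'french')],
}

def phrase_path(derivation):
    """derivation = (rule_id, [sub-derivations for NT1, NT2, ...]).
    Returns the discontinuous phrase-based path in target order."""
    rule_id, children = derivation
    path = []
    for element in RULES[rule_id]:
        if element[0] == 'PP':
            path.append((element[1], element[2]))
        else:                                  # substitute the nonterminal
            path.extend(phrase_path(children[element[1] - 1]))
    return path

# Derivation for "je ne parle pas le français" -> "I don't speak french"
derivation = ('r3', [('r2', [('r1', [])])])
print(phrase_path(derivation))
# [('Je', 'I'), ('ne ... pas', "don't"), ('parle', 'speak'), ('le français', 'french')]

The three-rule derivation yields the four phrase pairs discussed above, in target order, which is exactly the sequence over which phrase-based reordering features can then be computed.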
Our Phrasal-Hiero addresses the shortcoming of the original Hiero system by incorporating phrase-based features. Let us revisit machine translation's log-linear combination of features in Equation 1. We denote ph(a) as the corresponding phrase-based path of a Hiero derivation a, and MPh\H as the indexes of the phrase-based features currently not applicable to the Hiero decoder. Our Phrasal-Hiero decoder seeks the translation that optimizes:

    ⟨e∗, a∗⟩ = argmax⟨e,a⟩∈Htr(f) [ Σm∈MH λm hm(e, f, a) + Σm′∈MPh\H λm′ hm′(e, f, ph(a)) ].

We focus on improving the modeling of reordering within Hiero and include discriminative reordering features (Tillmann, 2004) and a distance cost feature, both of which are not modeled in the original Hiero system. Chiang et al. (2008) added structural distortion features to their decoder and showed improvements in their Chinese-English experiment. To our knowledge, Phrasal-Hiero is the first system that directly integrates phrase-based and Hiero features into one model.

Figure 1: Example of French-English Hiero translation on the left and its corresponding discontinuous phrase-based translation on the right.

Rule                                        Alignments     Phrase pairs & nonterminals
r1 = X → parle ; speak                      0-0            (parle ; speak)
r2 = X → ne X1 pas ; don't X1               0-0 1-1 2-0    (ne . . . pas ; don't) ; X1
r3 = X → Je X1 le Français ; I X1 French    0-0 1-1 3-2    (Je ; I) ; X1 ; (le Français ; french)
r4 = X → je X1 le X2 ; i X1 X2              0-0 1-1 3-2    Not applicable
Table 1: Rules and their sequences of phrase pairs and nonterminals.

Previous work has attempted to weaken the context-free assumption of the synchronous context-free grammar formalism, for example by using syntactic nonterminals (Zollmann and Venugopal, 2006). Our approach can be viewed as applying a soft context constraint that makes the probability of substituting a nonterminal with a subtree depend on the corresponding phrase-based reordering features. In the next section, we explain the model in detail.

2 Phrasal-Hiero Model

Phrasal-Hiero maps a Hiero derivation into a discontinuous phrase-based translation path by the following two steps:

1. Training: represent each rule as a sequence of phrase pairs and nonterminals.
2. Decoding: use the rules' sequences of phrase pairs and nonterminals to find the corresponding phrase-based path of a Hiero derivation and calculate its feature scores.

2.1 Map Rule to a Sequence of Phrase Pairs and Nonterminals

We segment the rules' lexical items into phrase pairs. These phrase pairs will be part of the phrase-based translation path in the decoding step. The rules' nonterminals are also preserved in the sequence; during decoding they will be substituted by other rules' phrase pairs.

We now explain how to map a rule to a sequence of phrase pairs and nonterminals. Let r = X → s0 X1 s1 . . . Xk sk ; t0 Xα(1) t1 . . . Xα(k) tk be a rule with k nonterminals, where α(·) defines the order of the nonterminals on the target side. The si and ti, i = 0 . . . k, are the phrases between nonterminals; they can be empty, because a nonterminal can be at the border of the rule or two nonterminals can be adjacent. For example, the rule X → ne X1 pas ; not X1 has k = 1, s0 = ne, s1 = pas, t0 = not, and t1 is an empty phrase because the target X1 is at the rightmost position.

Phrasal-Hiero retains both the nonterminals and the lexical alignments of Hiero rules, instead of only the nonterminal mappings as in (Chiang, 2005). A rule's lexical alignment is the most frequent one in the training data.
We use the lexical alignments of a rule to decide how source phrases and target phrases are connected. In the rule r, a source phrase si is connected to a target phrase ti′ if at least one word in si aligns to a target word in ti′. In the rule X →Je X1 le Franc¸ais ; I X1 french extract from sentence pair in Figure 1, the phrase le Franc¸ais connects to the phrase french because the French word Franc¸ais aligns with the English word french even though le is unaligned. We then group the source phrases and target phrases into phrase pairs such that only phrases that are connected to each other are in the same phrase pair. So phrase pairs still preserve the lexical dependency of the rule. Phrase pairs and nonterminals are then ordered according to the target side of the rule. Table 1 shows an example of rules, alignments and their sequences of phrase pairs and nonterminals on the last column. Figure 2: Alignment of a sentence pair. There are Hiero rules in which one of its source phrases or target phrases is not aligned. For example in the rule r4 = X →je X1 le X2 ; i X1 X2 extracted from the sentence pair in Figure 2, the phrase le is not aligned. In our Arabic-English experiment, rules without nonaligned phrases account for only 48.54% of the total rules. We compared the baseline Hiero translation from the full set of rules and the translation from only rules without nonaligned phrases. The later translation is faster and Table 2 1 shows that it outperforms the translation with the whole set of rules. We therefore decided to not use rules with nonaligned phrases in Phrasal-Hiero. It is important to note that there are different ways to use all the rules and map rules with unaligned phrases into a sequence of phrase pairs. 1The dataset and experiment setting description are in section 4. Test set MT04 MT05 MT09 All rules 48.17 47.85 42.37 Phrasal Hiero 48.52 47.78 42.8 Table 2: Arabic-English pilot experiment. Compare BLEU scores of translation using all extracted rules (the first row) and translation using only rules without nonaligned subphrases (the second row). For example, adding these unaligned phrases to the previous phrase pair i.e. the rule r4 has one discontinuous phrase pair (je . . . le, i) or treat these unaligned phrases as deletion/insertion phrases. We started the work with Arabic-English translation and decided not to use rules with nonaligned phrases in Phrasal-Hiero. In the experiment section, we will discuss the impact of removing rules with nonaligned sub-phrases in our GermanEnglish and Chinese-English experiments. 2.2 Training: Lexicalized Reordering Table Phrasal-Hiero needs a phrase-based lexicalized reordering table to calculate the features. The lexicalized reordering table could be from a discontinuous phrase-based system. To guarantee the lexicalized reordering table to cover all phrase pairs of the rule table, we extract phrase-pairs and their reordering directions during rule extraction. Let (s, t) be a sentence pair in the training data and r = X →s0X1s1 . . . Xksk ; t0X1t1 . . . Xktk be a rule extracted from the sentence. The lexical phrase pair corresponding to the rule r is ph = (s0 . . . s1 . . . sk, t0 . . . t1 . . . tk), with nonterminals are replaced by the gaps. Because the nonterminal could be at the border of the rule, the lexical phrase pair might have smaller coverage than the rule. For example, the training sentence pair in Figure 2 generates the rule r2 = X → ne X1 pas ; don′t X1 spanning (1 . . . 3, 1 . . . 2) but its lexical phrase pair (ne . . 
. pas, not) only spans (1 . . . 3, 1 . . . 1). Also, two different rules can have the same lexical phrase pairs. In Phrasal-Hiero, each lexical phrase pair is only generated once for a sentence. Look at the example of the training sentence pair in Figure 2, the rule X → je ; I spanning (0 . . . 1, 0 . . . 1) and the rule X → je X1 ; I X1 spanning (0 . . . 3, 0 . . . 2) are both sharing the same lexical phrase pair (je, i) spanning (0 . . . 1, 0 . . . 1). But Phrasal-Hiero only gen1590 erates (je, i) once for the sentence. Phrase pairs are generated together with phrase-based reordering orientations to build lexicalized reordering table. 3 Decoding Chiang (2007) applied bottom up chart parsing to parse the source sentence and project on the target side for the best translation. Each chart cell [X, i, j, r] indicates a subtree with rule r at the root covers the translation of the i-th word upto the j-th word of the source sentence. We extend the chart parsing, mapping the subtree to the equivalent discontinuous phrase-based path and includes phrasebased features to the log-linear model. In Phrasal-Hiero, each chart cell [X, i, j, r] also stores the first phrase pair and the last phrase pair of the phrase-based translation path covered the ith to the j-th word of the source sentence. These two phrase pairs are the back pointers to calculate reordering features of later larger spans. Because the distance cost feature and phrase-based discriminative reordering feature calculation are both only required the source coverage of two adjacent phrase pairs, we explain here the distance cost calculation. We will again use three rules r1, r2, r3 in Table 1 and the translation je ne parle pas le franc¸ais into I don’t speak French to present the technique. Table 3 shows the distance cost calculation. First, when the rule r has only terminals, the rule’s sequence of phrase pairs and nonterminals consists of only a phrase pair. No calculation is needed, the first phrase pair and the last phrase pair are the same. The chart cell X1 : 2 . . . 2 in Table 3 shows the translation with the rule r1 = X →parle ; speak. The first phrase pair and the last phrase pair point to the phrase (parle, speak) spanning 2 . . . 2 of the source sentence. When the translation rule’s right hand side has nonterminals, the nonterminals in the sequence belong to smaller chart cells that we already found phrase-based paths and calculated their features before. The decoder then substitute these paths into the rule’s sequence of phrase pairs and nonterminals to form the complete path for the current span. We now demonstrate finding the phrase based path and calculate distance cost of the chart cell X2 spanning 1 . . . 3. The next phrase pair of (ne . . . pas, don′t) is the first phrase pair of the chart cell X1 which is (parle, speak). The distance cost of these two phrase pairs according to discontinuous phrase-based model is |2 −3 −1| = 2. The distance cost of the whole chart cell X2 also includes the cost of the translation path covered by chart cell X1 which is 0, therefore the distance cost for X2 is 2 + dist(X1) = 2. We then update the first phrase pair and the last phrase pair of cell X2. The first phrase pair of X2 is (ne . . . pas, don′t), the last phrase pair is also the last phrase pair of cell X1 which is (parle, speak). 
Similarly, finding the phrase-based path and calculate its distortion features in the chart cell X3 include calculate the feature values for moving from the phrase pair (je, I) to the first phrase pair of chart cell X2 and also from last phrase pair of chart cell X2 to the phrase pair (le franc¸aise, french). 4 Experiment Results In all experiments we use phrase-orientation lexicalized reordering (Galley and Manning, 2008)2 which models monotone, swap, discontinuous orientations from both reordering with previous phrase pair and with the next phrase pair. There are total six features in lexicalized reordering model. We will report the impact of integrating phrasebased features into Hiero systems for three language pairs: Arabic-English, Chinese-English and German-English. 4.1 System Setup We are using the following three baselines: • Phrase-based without lexicalized reodering features. (PB+nolex) • Phrase-based with lexicalized reordering features.(PB+lex) • Hiero system with all rules extracted from training data. (Hiero) We use Moses phrase-based and chart decoder (Koehn et al., 2007) for the baselines. The score difference between PB+nolex and PB+lex results indicates the impact of lexicalized reordering features on phrase-based system. In Phrasal-Hiero we 2Galley and Manning (2008) introduce three orientation models for lexicalized reordering: word-based, phrase-based and hierarchical orientation model. We apply phrase-based orientation in all experiment using lexicalized reordering. 1591 Chart Cell Rule’s phrase pairs & NTs Distance First Phrase Pair Last Phrase Pair X1 : 2 . . . 2 (parle, speak) ∅ 2 . . . 2 (parle, speak) X2 : 1 . . . 3 (ne . . . pas, don′t) ; X1 2 + dist (X1) 1 . . . 3 2 . . . 2 (parle, speak) = 2 (ne . . . pas, don′t) X3 : 0 . . . 5 (Je ; I) ; X2 ; 0 + dist (X2) 0 . . . 0 (je, I) 4 . . . 5 (le Franc¸ais; french) +1 = 3 (le Franc¸ais; french) Table 3: Phrasal-Hiero Decoding Example: Calculate distance cost feature for the translation in Figure 1. will compare if these improvements still carry on into Hiero systems. The original Hiero system with all rules extracted from training data (Hiero) is the most relevant baseline. We will evaluate the difference between this Hiero baseline and our Phrasal-Hiero. To implement Phrasal-Hiero, we extented Moses chart decoder (Koehn et al., 2007) to include distance-based reordering as well as the lexicalized phrase orientation reordering model. We will report the following results for Phrasal-Hiero: • Hiero translation results on the subset of rules without unaligned phrases. (we denote this in the table scores as P.H.) • Phrasal-Hiero with phrase-based distance cost feature (P.H.+dist). • Phrasal-Hiero with phrase-based lexicalized reordering features(P.H.+lex). • Phrasal-Hiero with distance cost and lexicalized reordering features(P.H.+dist+lex). 4.2 Arabic-English Results The Arabic-English system was trained from 264K sentence pairs with true case English. The Arabic is in ATB morphology format. The language model is the interpolation of 5-gram language models built from news corpora of the NIST 2012 evaluation. We tuned the parameters on the MT06 NIST test set (1664 sentences) and report the BLEU scores on three unseen test sets: MT04 (1353 sentences), MT05 (1056 sentences) and MT09 (1313 sentences). All test sets have four references per each sentence. The results are in Table 4. The three rows in the first block are the baseline scores. 
Phrase-based with lexicalized reordering features (PB+lex) shows significant improvement on all test sets over the simple phrase-based system without lexicalized reordering (PB+nolex); on average the improvement is 1.07 BLEU points (45.66 versus 46.73).

                      MT04   MT05   MT09   Avg.
PB+nolex              47.40  46.83  42.75  45.66
PB+lex                48.62  48.07  43.51  46.73
Hiero                 48.17  47.85  42.37  46.13
P.H. (48.54% rules)   48.52  47.78  42.80  46.37
P.H.+dist             48.46  47.92  42.62  46.33
P.H.+lex              48.70  48.59  43.84  47.04
P.H.+lex+dist         49.35  49.07  43.40  47.27
Improv. over PB+lex   0.73   1.00   0.34   0.54
Improv. over P.H.     0.83   1.29   1.04   0.90
Improv. over Hiero    1.18   1.22   1.47   1.14
Table 4: Arabic-English true case translation scores in BLEU metric. The three rows in the first block are the baseline scores. The next four rows in the second block are Phrasal-Hiero scores; the best scores are in boldface. The three rows in the last block are the Phrasal-Hiero improvements.

We make the same observation as Zollmann et al. (2008), i.e., that the Hiero baseline system underperforms the phrase-based system with lexicalized phrase-based reordering for Arabic-English on all test sets, on average by about 0.60 BLEU points (46.13 versus 46.73). This is because Arabic has relatively free, but mostly short-distance, reordering, which is better captured by discriminative reordering features.

The next four rows in the second block of Table 4 show the Phrasal-Hiero results. The P.H. line is the result of the Hiero experiment using only the subset of rules without nonaligned phrases. As mentioned in Section 2.1, Phrasal-Hiero only uses 48.54% of the rules but achieves as good or even better performance (on average 0.24 BLEU points better) compared to the original Hiero system using the full set of rules. We do not benefit from adding only the distance-based reordering feature (P.H.+dist) in the Arabic-English experiment, but we get significant improvements when adding the six features of the lexicalized reordering model (P.H.+lex): Table 4 shows that the P.H.+lex system gains on average 0.67 BLEU points (47.04 versus 46.37). Even though the baseline Hiero underperforms the phrase-based system with lexicalized reordering (PB+lex), the P.H.+lex system already outperforms PB+lex on all test sets (on average 47.04 versus 46.73). Adding both the distance cost and the lexicalized reordering features (P.H.+dist+lex) performs best: on average P.H.+dist+lex improves 0.90 BLEU points over P.H. without the new phrase-based features and 1.14 BLEU points over the baseline Hiero system. Note that Hiero rules already have lexical context in the reordering, but adding phrase-based lexicalized reordering features to the system still gives about as much improvement as the phrase-based system gets from lexicalized reordering features, here 1.07 BLEU points. Our best Phrasal-Hiero also significantly improves over the best phrase-based baseline, by 0.54 BLEU points. This shows that the underperformance of the Hiero system is due to its lack of lexicalized reordering features rather than a limited hypothesis space.

4.3 Chinese-English Results

The Chinese-English system was trained on the FBIS corpus of 384K sentence pairs; the English side is lowercased. The language model is a trigram SRI language model built from the Xinhua corpus of 180 million words. We tuned the parameters on the MT06 NIST test set of 1664 sentences and report results on the MT04, MT05 and MT08 unseen test sets. The results are in Table 5.
We make the same observation as Zollmann et al. (2008) on the baselines for Chinese-English translation: even though the phrase-based system benefits from lexicalized reordering (PB+lex on average outperforms PB+nolex by 1.16 BLEU points, 25.87 versus 27.03), it is the Hiero system that has the best baseline scores across all test sets, with an average of 27.70 BLEU points.

                      MT04   MT05   MT08   Avg.
PB+nolex              29.99  26.4   21.23  25.87
PB+lex                31.03  27.57  22.41  27.03
Hiero                 32.49  28.06  22.57  27.70
P.H. (84.19% rules)   31.83  27.66  22.16  27.21
P.H.+dist             32.18  28.25  22.46  27.63
P.H.+lex              32.55  28.51  23.08  28.05
P.H.+lex+dist         33.06  28.78  23.23  28.35
Improv. over PB+lex   2.03   1.21   0.82   1.32
Improv. over P.H.     1.23   1.12   1.07   1.14
Improv. over Hiero    0.57   0.72   0.66   0.65
Table 5: Chinese-English lower case translation scores in BLEU metric.

The Phrasal-Hiero scores are given in the second block of Table 5. Phrasal-Hiero uses 84.19% of the total training rules, but unlike the Arabic-English system, using a subset of the rules costs Phrasal-Hiero on all test sets; on average it loses 0.49 BLEU points (27.21 versus 27.70). Similar to Chiang et al. (2008) in their Chinese-English experiment, we benefit from adding the distance cost feature: P.H.+dist outperforms P.H. on all test sets. We see larger improvements when adding the six features of the lexicalized reordering model: P.H.+lex on average reaches 28.05 BLEU points, i.e., it gains 0.84 over P.H. The P.H.+lex system is even better than the best Hiero baseline using the whole set of rules. We again get the best translation when adding both the distance cost feature and the lexicalized reordering features: P.H.+dist+lex has the best score across all the test sets and on average gains 1.14 BLEU points over P.H. So adding phrase-based features to the Hiero system yields nearly the same improvement as adding lexicalized reordering features to the phrase-based system. This shows that a strong Chinese-English Hiero system still benefits from phrase-based features. Furthermore, P.H.+dist+lex also outperforms the Hiero baseline using all rules from the training data.

4.4 German-English Results

We next consider German-English translation. The systems were trained on 1.8 million sentence pairs from the Europarl corpora. The language model is a trigram SRILM model trained on the target side of the training corpora. We use WMT 2010 (2489 sentences) as the development set and report scores on WMT 2008 (2051 sentences), WMT 2009 (2525 sentences), and WMT 2011 (3003 sentences). All test sets have one reference per test sentence. The results are in Table 6.

WMT test              08     09     11     Avg.
PB+nolex              17.46  17.38  16.76  17.20
PB+lex                18.16  17.85  17.18  17.73
Hiero                 18.20  18.23  17.46  17.96
P.H. (80.54% rules)   18.24  18.15  17.39  17.92
P.H.+dist             18.19  17.97  17.41  17.85
P.H.+lex              18.59  18.46  17.69  18.24
P.H.+lex+dist         18.70  18.53  17.81  18.34
Improv. over PB+lex   0.54   0.68   0.63   0.61
Improv. over P.H.     0.46   0.38   0.42   0.42
Improv. over Hiero    0.50   0.30   0.35   0.38
Table 6: German-English lower case translation scores in BLEU metric.

The Hiero baseline performs on average 0.26 BLEU points better than the phrase-based system with lexicalized reordering features (PB+lex). The Phrasal-Hiero system uses 80.54% of the total training rules, but on average the P.H. system has the same performance as the Hiero system using all the rules extracted from the training data. Similar to the Arabic-English experiment, Phrasal-Hiero does not benefit from adding the distance cost feature. We do, however, see improvements on all test sets when adding lexicalized reordering features.
On average the P.H.+lex results are 0.32 BLEU points higher than the P.H. results. The best scores are achieved with P.H.+lex+dist: the German-English translations on average gain 0.38 BLEU points by adding both the distance cost and the discriminative reordering features.

4.5 Impact of Segmenting Rules into Phrase Pairs

Phrasal-Hiero is the first system to use rules' lexical alignments. If lexical alignments are not available, we cannot divide a rule's lexical items into phrase pairs without losing their dependencies. An alternative approach would be to combine all lexical items of a rule into one phrase pair. We ran an additional experiment with this approach on the Arabic-English dataset. Table 7 shows example rules and their new sequences of nonterminals and phrase pairs.

Rule                                        Phrase pairs & nonterminals
r1 = X → parle ; speak                      (parle ; speak)
r2 = X → ne X1 pas ; don't X1               (ne . . . pas ; don't) ; X1
r3 = X → Je X1 le Français ; I X1 French    (Je . . . le Français ; I . . . french) ; X1 ; (Je . . . le Français ; I . . . french)
r4 = X → je X1 le X2 ; i X1 X2              (je . . . le ; i) ; X1 ; X2
Table 7: Example of translation rules and their sequences of phrase pairs and nonterminals when lexical alignments are not available.

The rules r1 and r2 have the same sequences as in Table 1. Without segmenting rules into phrase pairs, the rule r3 has only one phrase pair, ph = (Je . . . le Français ; I . . . french), and ph is repeated twice in r3's sequence of phrase pairs and nonterminals. The new experiment uses the complete set of rules, so the rule r4 is included. According to the new sequence of phrase pairs and nonterminals, during decoding the rule r3 has discontinuous translation directions both from the phrase pair ph to the nonterminal X1 and from X1 to ph. But using the lexical alignments and dividing the rule into phrase pairs as in Section 2.1, the sequence preserves the translation order of r3 as two monotone translations, from (je ; I) to X1 and from X1 to (le Français ; french).

                                  Avg.
Hiero                             46.13
Hiero+lex (no lex. alignments)    46.45 (+0.32)
P.H.                              46.37
P.H.+lex (with lex. alignments)   47.04 (+0.67)
Table 8: Average Arabic-English translation scores in BLEU metric, comparing the improvement when using rules' lexical alignments (2nd block) and when not using them (1st block).

Table 8 compares the results of the two experiments. The additional experiment is denoted Hiero+lex in the table. The first block shows an improvement of 0.32 BLEU points when adding discriminative reordering features to Hiero (using the whole set of rules and no rule segmentation). The second block shows the impact of adding discriminative reordering features to Phrasal-Hiero (using a subset of rules and segmenting rules into phrase pairs); here the improvement of P.H.+lex over P.H. is 0.67 BLEU points. This demonstrates the benefit of segmenting rules into phrase pairs.

4.6 Rules without Unaligned Phrases

               A-E     C-E     G-E
Hiero          46.13   27.70   17.96
P.H.           46.36   27.21   17.92
%Rules used    48.54%  84.19%  80.54%
P.H.+lex+dist  47.27   28.35   18.34
Table 9: The impact of using only rules without nonaligned phrases on Phrasal-Hiero.

Using only rules without nonaligned phrases achieves the same performance as translation with the full set of rules in the Arabic-English and German-English experiments, but underperforms for the Chinese-English system. We suggest the difference might come from the linguistic divergences of the source and target languages.
Phrasal Hiero includes all lexical rules (rules without nonterminal) therefore it still has the same lexical coverage as the original Hiero system. In the Arabic-English system, the Arabic is in ATB format, therefore most English words should have alignments in the ATB source, rules with nonaligned phrases could be the results of bad alignments or non-informative rules, therefore we could have better performance by using a subset of rules in Phrasal-Hiero. As Chinese and English are highly divergent, we expect many phrases in one language correctly unaligned in the other language. So leaving out the rules with nonaligned phrases could degrade the system. Even though the current Phrasal-Hiero with extra phrase-based features outperforms the Hiero baseline, future work for Phrasal-Hiero will focus on including all rules extracted from training corpora. 4.7 Discontinuous Phrase-Based C-E G-E PB+lex 27.03 17.73 PB+lex+gap 27.11 17.55 Hiero 27.70 17.96 P.H.+lex+dist 28.35 18.34 Table 10: Comparing Phrasal-Hiero with translation with gap for Chinese-English and GermanEnglish. The numbers are average BLEU scores of all test sets. We compare Phrasal-Hiero with a discontinuous phrase-based system introduced by Galley and Manning (2010) for Chinese-English and GermanEnglish system. Table 10 shows the average results. We used Phrasal decoder (Cer et al., 2010) for phrase-based with gaps (PB+lex+gap) results. While we do not focus on the differences in the toolkits, our Phrasal-Hiero still outperforms the phrase-based with gaps experiments. Conclusion We have presented a technique to combine phrasebased features and tree-based features into one model. Adding a distance cost feature, we only get better translation for Chinese-English translation. Phrasal-Hiero benefits from adding discriminative reodering features in all experiment. We achieved the best result when adding both distance cost and lexicalized reordering features. PhrasalHiero currently uses only a subset of rules from training data. A future work on the model can include complete rule sets together with word insertion/deletion features for nonaligned phrases. References A. Birch, P. Blunsom, and M. Osborne. 2009. A Quantitative Analysis of Reordering Phenomena. In StatMT ’09: Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 197–205. X. Carreras and M. Collins. 2009. Non-Projective Parsing for Statistical Machine Translation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 Volume 1, EMNLP ’09, pages 200–209. D. Cer, M. Galley, D. Jurafsky, and C. Manning. 2010. Phrasal: A Statistical Machine Translation Toolkit for Exploring New Model Features. In Proceedings of the NAACL HLT 2010 Demonstration Session, pages 9–12. Association for Computational Linguistics, June. D. Chiang, Y. Marton, and P. Resnik. 2008. Online Large-Margin Training of Syntactic and Structural Translation Features. In Proceedings of the Conference on Empirical Methods in Natural Language 1595 Processing, pages 224–233. Association for Computational Linguistics. D. Chiang. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proc. of ACL. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. M. Galley and C. Manning. 2008. A Simple and Effective Hierarchical Phrase Reordering Model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 847– 855, Honolulu, Hawaii, October. M. 
Galley and C. D. Manning. 2010. Accurate NonHierarchical Phrase-Based Translation. In Proceedings of NAACL-HLT, pages 966–974. M. Huck, S. Peitz, M. Freitag, and H. Ney. 2012. Discriminative Reordering Extensions for Hierarchical Phrase-Based Machine Translation. In EAMT, pages 313–320. P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical Phrase-Based Translation. In Proc. of HLT-NAACL, pages 127–133. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL demonstration session. C. Tillmann. 2004. A Unigram Orientation Model for Statistical Machine Translation. In Proceedings of HLT-NAACL: Short Papers, pages 101–104. A. Zollmann and A. Venugopal. 2006. Syntax Augmented Machine Translation via Chart Parsing. In Proc. of NAACL 2006 - Workshop on Statistical Machine Translation. A. Zollmann, A. Venugopal, F. Och, and J. Ponte. 2008. A Systematic Comparison of Phrase-Based, Hierarchical and Syntax-Augmented Statistical MT. In Proceedings of the Conference on Computational Linguistics (COLING). 1596
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1597–1607, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Machine Translation Detection from Monolingual Web-Text Yuki Arase Microsoft Research Asia No. 5 Danling St., Haidian Dist. Beijing, P.R. China [email protected] Ming Zhou Microsoft Research Asia No. 5 Danling St., Haidian Dist. Beijing, P.R. China [email protected] Abstract We propose a method for automatically detecting low-quality Web-text translated by statistical machine translation (SMT) systems. We focus on the phrase salad phenomenon that is observed in existing SMT results and propose a set of computationally inexpensive features to effectively detect such machine-translated sentences from a large-scale Web-mined text. Unlike previous approaches that require bilingual data, our method uses only monolingual text as input; therefore it is applicable for refining data produced by a variety of Web-mining activities. Evaluation results show that the proposed method achieves an accuracy of 95.8% for sentences and 80.6% for text in noisy Web pages. 1 Introduction The Web provides an extremely large volume of textual content on diverse topics and areas. Such data is beneficial for constructing a large scale monolingual (Microsoft Web N-gram Services, 2010; Google N-gram Corpus, 2006) and bilingual (Nie et al., 1999; Shi et al., 2006; Ishisaka et al., 2009; Jiang et al., 2009) corpus that can be used for training statistical models for NLP tools, as well as for building a large-scale knowledge-base (Suchanek et al., 2007; Zhu et al., 2009; Fader et al., 2011; Nakashole et al., 2012). With recent advances in statistical machine translation (SMT) systems and their wide adoption in Web services through APIs (Microsoft Translator, 2009; Google Translate, 2006), a large amount of text in Web pages is translated by SMT systems. According to Rarrick et al. (2011), their Web crawler finds that more than 15% of EnglishJapanese parallel documents are machine translation. Machine-translated sentences are useful if they are of sufficient quality and indistinguishable from human-generated sentences; however, the quality of these machine-translated sentences is generally much lower than sentences generated by native speakers and professional translators. Therefore, a method to detect and filter such SMT results is desired to best make use of Web-mined data. To solve this problem, we propose a method for automatically detecting Web-text translated by SMT systems1. We especially target machinetranslated text produced through the Web APIs that is rapidly increasing. We focus on the phrase salad phenomenon (Lopez, 2008), which characterizes translations by existing SMT systems, i.e., each phrase in a sentence is semantically and syntactically correct but becomes incorrect when combined with other phrases in the sentence. Based on this trait, we propose features for evaluating the likelihood of machine-translated sentences and use a classifier to determine whether the sentence is generated by the SMT systems. The primary contributions of the proposed method are threefold. First, unlike previous studies that use parallel text and bilingual features, such as (Rarrick et al., 2011), our method only requires monolingual text as input. Therefore, our method can be used in monolingual Web data mining where bilingual information is unavailable. 
Second, the proposed features are designed to be computationally light so that the method is suitable for handling a large-scale Web-mined data. Our method determines if an input sentence contains phrase salads using a simple yet effective features, i.e., language models (LMs) and automatically obtained non-contiguous phrases that are frequently used by people but difficult for SMT systems to generate. Third, our method computes features using both human-generated text and SMT 1In this paper, the term machine-translated is used for indicating translation by SMT systems. 1597 results to capture a phrase salad by contrasting these features, which significantly improves detection accuracy. We evaluate our method using Japanese and English datasets, including a human evaluation to assess its performance. The results show that our method achieves an accuracy of 95.8% for sentences and 80.6% for noisy Web-text. 2 Related Work Previous methods for detecting machinetranslated text are mostly designed for bilingual corpus construction. Antonova and Misyurev (2011) design a phrase-based decoder for detecting machine-translated documents in Russian-English Web data. By evaluating the BLEU score (Papineni et al., 2002) of translated documents (by their decoder) against the target-side documents, machine translation (MT) results are detected. Rarrick et al. (2011) extract a variety of features, such as the number of tokens and character types, from sentences in both the source and target languages to capture words that are mis-translated by MT systems. With these features, the likelihood of a bilingual sentence pair being machine-translated can be determined. Confidence estimation of MT results is also a related area. These studies aim to precisely replicate human judgment in terms of the quality of machine-translated sentences based on features extracted using a syntactic parser (CorstonOliver et al., 2001; Gamon et al., 2005; Avramidis et al., 2011) or essay scoring system (Parton et al., 2011), assuming that their input is always machine-translated. In contrast, our method aims at making a binary judgment to distinguish machine-translated sentences from a mixture of machine-translated and human-generated sentences. In addition, although methods for confidence estimation can assume sentences of a known source language and reference translations as inputs, these are unavailable in our problem setting. Another related area is automatic grammatical error detection for English as a second language (ESL) learners (Leacock et al., 2010). We use common features that are also used in this area. They target specific error types commonly made by ESL learners, such as errors in prepositions and subject-verb agreement. In contrast, our method does not specify error types and aims to detect machine-translated sentences focusing on the phrase salad phenomenon produced by SMT systems. In addition, errors generated by ESL learners and SMT systems are different. ESL learners make spelling and grammar mistakes at the word level but their sentence are generally structured while SMT results are unstructured due to phrase salads. Works on translationese detection (Baroni and Bernardini, 2005; Kurokawa et al., 2009; Ilisei et al., 2010) aim to automatically identify humantranslated text by professionals using text generated by native speakers. These are related, but our work focuses on machine-translated text. The closest to our approach is the method proposed by Moore and Lewis (2010). 
It automatically selects data for creating a domain-specific LM. Specifically, the method constructs LMs using corpora of target and non-target domains and computes a cross-entropy score of an input sentence for estimating the likelihood that the input sentence belongs to the target or non-target domains. While the context is different, our work uses a similar idea of data selection for the purpose of detecting low-quality sentences translated by SMT systems. 3 Proposed Method When APIs of SMT services are used for machinetranslating an Web page, they typically insert specific tags into the HTML source. Utilizing such tags makes MT detection trivial. However, the actual situation is more complicated in real Web data. When people manually copy and paste machine-translated sentences, such tags are lost. In addition, human-generated and machinetranslated sentences are often mixed together even in a single paragraph. To observe the distribution of machine-translated sentences in such difficult cases, we examine 3K sentences collected by our in-house Web crawler. Among them, excluding the pages with the tags of MT APIs, 6.7% of them are found to be clearly machine translation. Our goal is to automatically identify these sentences that cannot be simply detected by the tags, except when the sentences are of sufficient quality to be indistinguishable from human-generated sentences. 3.1 Phrase Salad Phenomenon Fig. 1 illustrates the phrase salad phenomenon that characterizes a sentence translated by an existing 1598 | Of surprise | was up | foreigners flocked | overseas | as well, | they publicized not only | Japan, | saw an article from the news. | Natural English: The news was broadcasted not only in Japan but also overseas, and it surprised foreigners who read the article. Unnatural phrase sequence Natural phrase | | Missing combinational word Figure 1: The phrase salad phenomenon in a sentence translated by an SMT system; each (segmented) phrase is correct and fluent, but dotted arcs show unnatural sequences of phrases and the boxed phrase shows an incomplete non-contiguous phrase. SMT system. Each phrase, a sequence of consecutive words, is fluent and grammatically correct; however, the fluency and grammar correctness are both poor in inter-phrases. In addition, a phrase salad becomes obvious by observing distant phrases. For example, the boxed phrase in Fig. 1 is a part of the non-contiguous phrase “not only ⋆but also2;” however, it lacks the latter part of the phrase (“but also”) that is also necessary for composing a meaning. Such non-contiguous phrases are difficult for most SMT systems to generate, since these phrases require insertion of subphrases in distant parts of the sentence. Based on the observation of these characteristics, we define features to capture a phrase salad by examining local and distant phrases. These features evaluate (1) fluency (Sec. 3.2), (2) grammaticality (Sec. 3.3), and (3) completeness of non-contiguous phrases in a sentence (Sec. 3.4). Furthermore, humans can distinguish machinetranslated text because they have prior knowledge of how a human-generated sentence would look like, which has been accumulated by observing a lot of examples through their life. This knowledge makes phrase-salads, e.g., missing objects and influent sequence of words, obvious for humans since they rarely appear on human-generated sentences. Based on this assumption, we extract these features using both human-generated and machine-translated text. 
Features extracted from human-generated text represent the similarity to human-generated text. Likewise, features extracted from machine-translated text depict the similarity to machine-translated text. By contrasting these feature weights, we can effectively capture phrase salads in the sentence. 3.2 Fluency Feature In a machine-translated sentence, fluency becomes poor among phrases where a phrase salad occurs. We capture this influency using two independent LM scores; fw,H and fw,MT . The former LM is 2We use the symbol ⋆to represent a gap in which any word or phrase can be placed. trained with human-generated sentences and the latter one is trained with machine-translated sentences. We input a sentence into both of the LMs and use the scores as the fluency features. 3.3 Grammaticality Feature In a sentence with phrase salads, its grammaticality is poor because tense and voice become inconsistent among phrases. We capture this using LMs trained with part-of-speech (POS) sequences of human-generated and machine-translated sentences, and the features of fpos,H and fpos,MT are respectively computed. In a similar manner with a word-based LM, such grammatical inconsistency among phrases is detectable when computing a POS LM score, since the score becomes worse when an N-gram covers inter-phrases where a phrase salad occurs. This approach achieves computational efficiency since it only requires a POS tagger. Since a phrase salad may occur among distant phrases of a sentence, it is also effective to evaluate combinations of phrases that cannot be covered by the span of N-gram. For this purpose, we make use of function words that sparsely appear in a sentence where their combinations are syntactically constrained. For example, the same preposition rarely appears many times in a humangenerated sentence, while it does in a machinetranslated sentence due to the phrase salad. Similar to the POS LM, we first analyze sentences generated by human or SMT by a POS tagger, extract sequences of function words, and finally train LMs with the sequences. We use these LMs to obtain scores that are used as features ffw,H and ffw,MT . 3.4 Gappy-Phrase Feature There are a lot of common non-contiguous phrases that consist of sub-phrases (contiguous word string) and gaps, which we refer to as gappyphrases (Bansal et al., 2011). We specifically use gappy-phrases of 2-tuple, i.e., phrases consisting of two sub-phrases and one gap in the middle. Let us take an English example “not only ⋆but 1599 Sequences World population not only grows , but grows old . A press release not only informs but also teases . Hazelnuts are not only for food , but also fuel . The coalition must not only listen but also act . Table 1: Example of a sequence database also.” When a sentence contains the phrase “not only,” the phrase “but also” is likely to appear in human-generated setences. Such a gappy-phrase is difficult for SMT systems to correctly generate and causes a phrase salad. Therefore, we define a feature to evaluate how likely a sentence contains gappy-phrases in a complete form without missing sub-phrases. This feature is effective to complement LMs that capture characteristics in N-grams. Sequential Pattern Mining It is costly to manually collect a lot of such gappy-phrases. Therefore, we regard the task as sequential pattern mining and apply PrefixSpan proposed by Pei et al. (2001), which is a widely used sequential pattern mining method3. 
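Before turning to the formal definition of sequential pattern mining, the following sketch illustrates the contrastive LM scoring behind the fluency and grammaticality features of Sections 3.2 and 3.3. It is a minimal stand-in rather than the authors' implementation: the paper trains 4-gram LMs with SRILM, whereas this sketch builds tiny add-one-smoothed bigram LMs, and the training sentences are invented examples. Feeding POS or function-word sequences instead of surface words would yield the f_pos and f_fw features in the same way.

```python
import math
from collections import Counter

class BigramLM:
    """Tiny add-one-smoothed bigram LM; a stand-in for the paper's SRILM 4-gram models."""
    def __init__(self, sentences):
        self.unigrams, self.bigrams = Counter(), Counter()
        for s in sentences:
            toks = ["<s>"] + s.split() + ["</s>"]
            self.unigrams.update(toks)
            self.bigrams.update(zip(toks, toks[1:]))
        self.vocab_size = len(self.unigrams)

    def logprob(self, sentence):
        toks = ["<s>"] + sentence.split() + ["</s>"]
        lp = 0.0
        for prev, cur in zip(toks, toks[1:]):
            # add-one smoothing over the bigram counts
            num = self.bigrams[(prev, cur)] + 1
            den = self.unigrams[prev] + self.vocab_size
            lp += math.log(num / den)
        return lp / max(len(toks) - 1, 1)  # length-normalized log-probability

def lm_contrast_features(sentence, lm_human, lm_mt):
    """Return the contrastive feature pair, e.g. (f_{w,H}, f_{w,MT})."""
    return lm_human.logprob(sentence), lm_mt.logprob(sentence)

# Illustrative usage with made-up training sentences.
lm_h = BigramLM(["the news surprised readers overseas",
                 "she wrote an article about the event"])
lm_m = BigramLM(["of surprise was up foreigners flocked",
                 "they publicized not only saw an article"])
print(lm_contrast_features("the article surprised foreigners overseas", lm_h, lm_m))
```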
Given a set of sequences and a user-specified min support ∈N threshold, the sequential pattern mining finds all frequent subsequences whose occurrence frequency is no less than min support. For example, given a sequence database like Table 1, the sequential pattern mining finds all frequent subsequences, e.g., “not only,” “not only ⋆ but also,” “not ⋆but ⋆,” and etc. To capture a phrase salad by contrasting appearance of gappy-phrases in human-generated and machine-translated text, we independently extract gappy-phrases from each of them using PrefixSpan. We then compute features fg,H and fg,MT using the obtained phrases. Observation of Extracted Gappy-Phrases Based on a preliminary experiment, we set the parameter min support of PrefixSpan to 100 for computational efficiency. We extract gappy-phrases (of 2-tuple) from our development dataset described in Sec. 4.1 that includes 254K human-generated and 134K machinetranslated sentences in Japanese, and 210K human-generated and 159K machine-translated sentences in English. Regarding the Japanese dataset, we obtain about 104K and 64K gappy-phrases from human3Due to the severe space limitation, readers are referred to that paper. generated and machine-translated sentences, respectively. According to our observation of the extracted phrases, 21K phrases commonly appear in human-generated and machine-translated sentences. Many of these common phrases are incomplete forms of gappy-phrases that lack semantic meaning to humans, such as “not only ⋆ the” and “not only ⋆and.” On the other hand, complete forms of gappy-phrases that preserve semantic meaning exclusively appear in phrases extracted from human-generated sentences. We also obtain about 74K and 42K phrases from humangenerated and machine-translated sentences in the English dataset (21K of them are common). Phrase Selection As a result of sequential pattern mining, we can gather a huge number of gappy-phrases from human-generated and machine-translated text, but as we described above, many of them are common. In addition, it is computationally expensive to use all of them. Therefore, our method selects useful phrases for detecting machine-translated sentences. Although there are several approaches for feature selection, e.g., (Sebastiani, 2002), we use a method that is suitable for handling a large number of feature candidates. Specifically, we evaluate gappy-phrases based on the information gain that measures the amount of information in bits obtained for class prediction when knowing the presence or absence of a phrase and the corresponding class distribution. This corresponds to measuring an expected reduction in entropy, i.e., uncertainty associated with a random factor. The information gain G ∈R for a gappy-phrase g is defined as G(g) .= H(C) −P(X1 g)H(C|X1 g) −P(X0 g)H(C|X0 g), where H(C) represents the entropy of the classification, C is a stochastic variable taking a class, Xg is a stochastic variable representing the presence (X1 g) or absence (X0 g) of the phrase g, P(Xg) represents the probability of presence or absence of the phrase g, and H(C|Xg) is the conditional entropy due to the phrase g. We use top-k phrases based on the information gain G. Specifically, we use the top 40% of phrases to compute the feature values. Table 2 shows examples of gappy-phrases extracted from human-generated and machinetranslated text in our development dataset and remain after feature selection. 
1600 in the early ⋆period after ⋆after the known as ⋆to and also ⋆and Human more ⋆than MT and ⋆but the not only ⋆but also no ⋆not with ⋆as well as not ⋆not Table 2: Example of gappy-phrases extracted from humangenerated and machine-translated text; phrases preserving semantic meaning are extracted only from human-generated text. The gappy-phrases depend on each other, and the more phrases extracted from human-generated (machine-translated) text are found in a sentence, the more likely the sentence is human-generated (machine-translated). Therefore, we compute the feature as fc(s) = X i∈k wiδ(i, s), where wi is a weight of the i-th phrase, and δ(i, s) is a Kronecker’s delta function that takes 1 if the sentence s includes the i-th phrase and takes 0 otherwise. We may set the weight wi according to the importance of the phrase, such as the information gain. In this work, we set wi to 1 for simplicity. 3.5 Classification Table 3 summarizes the features employed in our method. In addition to the discussed features, we use the length of a sentence as a feature flen to avoid the bias of LM-based features that favor shorter sentences. The proposed method takes a monolingual sentence from Web data as input and computes a feature vector of f = (fw,H, . . . , flen) ∈R9. Each feature is finally normalized to have a zero-mean and unit variance distribution. In the feature space, a support vector machine (SVM) classifier (Vapnik, 1995) is used to determine the likelihoods of machine-translated and human-generated sentences. 4 Experiments We evaluate our method using both Japanese and English datasets from various aspects and investigate its characteristics. In this section, we describe our experiment settings. 4.1 Data Preparation For the purpose of evaluation, we use humangenerated and machine-translated sentences for Feature Notation Fluency fw,H, fw,MT Grammaticality fpos,H, fpos,MT ffw,H, ffw,MT Gappy-phrase fg,H, fg,MT Length flen Table 3: List of proposed features and their notations constructing LMs, extracting gappy-phrases, and training a classifier. These sentences should be ensured to be human-generated or machinetranslated, and the human-generated and machinetranslated sentences express the same content for fairness of evaluation to avoid effects due to vocabulary difference. As a dataset that meets these requirements, we use parallel text in public websites (this is for fair evaluation and our method can be trained using nonparallel text on an actual deployment). Eight popular sites having Japanese and English parallel pages are crawled, whose text is manually verified to be human-generated. The main textual content of these 131K parallel pages are extracted, and the sentences are aligned using (Ma, 2006). As illustrated in Fig. 2, the text in one language is fed to the Bing translator, Google Translate, and an in-house SMT system4 implemented based on (Chiang, 2005) by ourselves for obtaining sentences translated by SMT systems. Due to a severe limitation on the number of requests to the APIs, we randomly subsample sentences before sending them to these SMT systems. We use text in the other language as human-generated sentences5. In this manner, we prepare 508K humangenerated and 268K machine-translated sentences as a Japanese dataset, and 420K human-generated and 318K machine-translated sentences as an English dataset. We split each of them into two even datasets and use one for development and the other for evaluation. 
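As a rough illustration of the gappy-phrase machinery above, the sketch below implements the information-gain criterion G(g) and the count feature f_c(s) = Σ_i w_i δ(i, s) with uniform weights w_i = 1, as in the paper. The regular-expression matcher and the toy phrases, sentences, and labels are illustrative assumptions; real candidates would come from PrefixSpan mining, which is not reproduced here.

```python
import math
import re

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def contains_gappy(phrase, sentence):
    # A 2-tuple gappy-phrase such as "not only * but also" matches when its
    # sub-phrases occur in order with an arbitrary gap (illustrative matcher).
    parts = [re.escape(p.strip()) for p in phrase.split("*")]
    return re.search(r"\b" + r"\b.+\b".join(parts) + r"\b", sentence) is not None

def information_gain(phrase, sentences, labels):
    """G(g) = H(C) - P(X_g=1) H(C|X_g=1) - P(X_g=0) H(C|X_g=0)."""
    present = [l for s, l in zip(sentences, labels) if contains_gappy(phrase, s)]
    absent = [l for s, l in zip(sentences, labels) if not contains_gappy(phrase, s)]
    gain = entropy(labels)
    for subset in (present, absent):
        if subset:
            gain -= (len(subset) / len(labels)) * entropy(subset)
    return gain

def gappy_feature(sentence, selected_phrases):
    # f_c(s) = sum_i w_i * delta(i, s) with uniform weights w_i = 1
    return sum(1 for p in selected_phrases if contains_gappy(p, sentence))

# Toy selection: keep the top 40% of candidate phrases by information gain.
sents = ["not only cheap but also fast", "not only the cheap",
         "prices not only rose but also doubled", "was not only the"]
labels = ["human", "mt", "human", "mt"]
candidates = ["not only * but also", "not only * the"]
ranked = sorted(candidates,
                key=lambda p: information_gain(p, sents, labels), reverse=True)
selected = ranked[: max(1, int(0.4 * len(ranked)))]
print(selected, gappy_feature("not only cheap but also fast", selected))
```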
4.2 Experiment Setting For the fluency and grammaticality features, we train 4-gram LMs using the development dataset with the SRI toolkit (Stolcke, 2002). To obtain the POS information, we use Mecab (Kudo et al., 2004) for Japanese and a POS tagger developed by Toutanova et al. (2003) for English. We evaluate 4A preliminary evaluation of the in-house SMT system shows that it has comparable quality with Bing translator. 5These are a mixture of sentences generated by native speakers and professional translators/editors. 1601 Parallel sentences MT systems Machine translated sentences Humangenerated sentences Humangenerated sentences Humangenerated sentences … English Japanese Japanese Japanese … … … Figure 2: Experimental data preparation; text in one language is fed to SMT systems and the other is used as humangenerated sentences. the effect of the sizes of N-grams and development dataset in the experiments. Using the proposed features, we train an SVM classifier for detecting machine-translated sentences. We use an implementation of LIBSVM (Chang and Lin, 2011) with a radial basis function kernel due to the relatively small number of features in the proposed method. We set appropriate parameters by grid search in a preliminary experiment. We evaluate the performance of MT detection based on accuracy6 that is a broadly used evaluation metric for classification problems: accuracy = nTP + nTN n , where nTP and nTN are the numbers of truepositives and true-negatives, respectively, and n is the total number of exemplars. The accuracy scores that we report in Sec. 5 are all based on 10fold cross validation. 4.3 Comparison Method We compare our method with the method of (Moore and Lewis, 2010) (Cross-Entropy). Although the Cross-Entropy method is designed for the task of domain adaptation of an LM, our problem is a variant of their original problem and thus their method is directly relevant. In our context, the method computes the cross-entropy scores IMT (s) and IH(s) of an input sentence s against LMs trained on machine-translated and human-generated sentences. Cross-entropy and perplexity are monotonically related, as perplexity of s according to an LM M is simply ob6Although we also examine precision and recall of classification results, they are similar to accuracy reported in this paper. Method Accuracy Cross-Entropy 90.7 Lexical Feature 87.8 Proposed feature Word LMs 94.1 POS LMs 91.3 FW LMs 82.7 GPs 85.7 Table 4: Accuracy (%) of individual features and comparison methods tained by bIM(s) where IM(s) is cross-entropy score and b is a base with regard to which the cross-entropy is measured. The method scores the sentence according to the cross-entropy difference, i.e., IMT (s) −IH(s), and decides that the sentence is machine-translated when the score is lower than a predefined threshold. The classification is performed by 10-fold cross validation. We find the best performing threshold on a training set and evaluate the accuracy with a test set using the determined threshold. Additionally, we compare our method to a method that uses a feature indicating presence or absence of unigrams, which we call Lexical Feature. This feature is commonly used for translationese detection and shows the best performance as a single feature in (Baroni and Bernardini, 2005). It is also used by Rarrick et al. (2011) and shows the best performance by itself in detecting machine-translated sentences in English-Japanese translation in the setting of bilingual input. 
We implement the feature and use it against a monolingual input to fit our problem setting. 5 Results and Discussions In this section, we analyze and discuss the experiment results in detail. 5.1 Accuracy on Japanese Dataset We evaluate the sentence-level and documentlevel accuracy of our method using the Japanese dataset. Specifically, we evaluate effects of individual features and their combinations, compare with human annotations, and assess performance variations across different sentence lengths and various settings on LM training. Effect of Individual Feature Table 4 shows the accuracy scores of individual features and comparison methods. We refer to features for fluency (fw,H, fw,MT ) as Word LMs, grammaticality using POS LMs (fpos,H, fpos,MT ) as POS LMs 1602 Method Accuracy Word LMs + GPs 94.7 Word LMs + POS LMs 95.1 Word LMs + POS LMs + GPs 95.4 Word LMs + POS LMs + FW LMs 95.5 All 95.8 Table 5: Accuracy (%) of feature combinations; there are significant differences (p ≪.01) against the accuracy score of Word LMs. and function word LMs (ffw,H, ffw,MT ) as FW LMs, respectively, and for completeness of gappyphrases (fg,H, fg,MT ) as GPs. The Word LMs show the best accuracy that outperforms CrossEntropy by 3.4% and Lexical Feature by 6.3%. This high accuracy is achieved by contrasting fluency in human-generated and machine-translated text to capture the phrase salad phenomenon. The accuracy of Word LM trained only on humangenerated sentences is limited to 65.5%. On the other hand, the accuracy of Word LM trained on machine-translated sentences shows a better performance (84.4%). By combining these into a single feature vector f = (fw,H, fw,MT , flen), the accuracy is largely improved. It is interesting that Lexical Feature achieves a high accuracy of 87.8% despite its simplicity. Since Lexical Feature is a bag-of-words model, it can consider distant words in a sentence. This is effective for capturing a phrase salad that occurs among distant phrases, which N-gram cannot cover. As for Cross-Entropy, a simple subtraction of cross-entropy scores cannot well contrast the fluency in human-generated and machinetranslated text and results in poorer accuracy than Word LMs. The accuracy of POS LMs (91.3%) is slightly lower than that of Word LMs due to the limited vocabulary, i.e., the number of POSs. The accuracy of FW LMs and GPs are even lower. This is convincing since these features cannot have reasonable values when a sentence does not include a function word and gappy-phrase. However, these features are complementary to Word LMs as we will see in the next paragraph. Effect of Feature Combination Table 5 shows the accuracy when combining features. Sign tests show that the accuracy scores of these feature combinations are significantly different (p ≪.01) against the accuracy of Word LMs. The results show that the features complement each other. The Error Ratio Accuracy (%) Word LMs All Has wrong content words 37.8 93.1 95.0 Misses content words 12.2 91.8 96.5 Has wrong function words 19.7 92.7 97.1 Misses function words 13.0 93.3 95.6 Has wrong inflections 10.8 97.3 98.7 Table 6: Distribution (%) of machine translation errors and accuracy (%) of proposed method on the different errors combination of all features reaches an accuracy of 95.8%, which improves the accuracy of Word LMs by 1.7%. This result supports that FW LMs and GPs are effective to capture a phrase salad occurring in distant phrases and complement the evidence in N-grams that is captured by LMs. 
This effect becomes more obvious in the human evaluation. We also evaluate the accuracy of the proposed method at a document level. Due to the high accuracy at the sentence-level, we use a voting method to judge a document, i.e., deciding if the document is machine-translated when γ% of its sentences are judged as machine-translated. We use all features and find that our method achieves 99% precision and recall with γ = 50. Human Evaluation To further investigate the characteristics of our method, we conduct a human evaluation. We sample Japanese sentences and ask three native speakers to 1) judge whether a sentence is human-generated or machine-translated and 2) list errors that the sentence contains. Regarding the task 1), we allow the annotators to assign “hard to determine” for difficult cases. We allocate about 230 sentences for each annotator (in total 700 sentences) without overlapping annotation sets. The accuracy of annotations is found to be 88.2%, which shows that our method is even superior to native speakers. Agreement between the annotators and our method (with all features) is 85.1%. As we interview the annotators, we find that human annotations are strongly affected by the annotators’ domain knowledge. For example, technical sentences are more often misclassified by the annotators. Table 6 shows the distribution of errors on machine-translated sentences found by the annotators (on sentences that they correctly classified) with the accuracy of Word LMs and all features on 1603 0 2 4 6 8 10 70 75 80 85 90 95 100 6 10 14 18 22 26 30 34 38 42 46 50 54 58 62 66 70 74 78 Ratio(%) Accuracy (%) Num. of words in a sentence Proposed Method Cross-Entropy Lexical Feature Human Length distribution Figure 3: Accuracy (%) across different sentence lengths (the primary axis) and distribution (%) of sentence lengths in the evaluation dataset (the secondly axis) these sentences (a sentence may contain multiple errors). It indicates that the accuracy of Word LMs is improved by feature combination; from 1.4% on sentences of “Has wrong inflections” to 4.7% on sentences of “Misses content words”. Effect of Sentence Length The accuracy of the proposed method is significantly affected by sentence length (the number of words in a sentence). Fig. 3 shows the accuracy of the proposed method (with all features) and comparison methods w.r.t. sentence lengths (with the primary axis), as well as the distribution of sentence lengths in the evaluation dataset (with the secondly axis). We aggregate the classification results on each crossvalidation (test results). It also shows the accuracy of human annotations w.r.t. sentence lengths, which we obtain for the 700 sentences in the human evaluation. The accuracy drops on all methods when sentences are short; the accuracy of our method is 91.6% when a sentence contains less than or equal to 10 words. The proposed method shows the similar trend with the human annotations, and even the accuracy of human annotations significantly drops on such short sentences. This result indicates that SMT results on short sentences tend to be of sufficient quality and indistinguishable from human-generated sentences. Since such high-quality machine-translations do not harm the quality of Web-mined data, we do not need to detect them. Effect of Setting on LM Training We evaluate the performance variation w.r.t. the sizes of N-grams and development dataset. Fig. 4 shows the accuracy of the LM based features and feature combination when changing sizes of N-grams. 
The performance of Word LMs is stabilized after 78 83 88 93 98 1 2 3 4 Accuracy (%) N-gram Word LMs POS LMs FW LMs ALL Figure 4: Effect of the sizes of N-grams on MT detection accuracy (%) 3-gram while that of POS LMs is still improved at 4-gram. This is because POS LMs need more evidence to compensate for their limited vocabulary. FW LMs become stable at 3-gram because the possible number of function words in a sentence should be small. When we change the size of the development dataset with 10% increments, the accuracy curve is stabilized when the size is 40% of all set. Considering the fact that the overall development dataset is small, it shows that our method is deployable with a small dataset. 5.2 Accuracy on English Dataset To investigate the applicability of our method to other languages, we apply the same method to the English dataset. Because English is a configurational language, function words are less flexible than case markers in Japanese. Therefore, SMT systems may better handle English function words, which potentially decreases the effect of FW LMs in our method. In addition, because English is a morphologically poor language, the effect of POS LMs may be reduced. Nevertheless, in our experiment, all features are shown to be effective even with the English dataset. The combination of all features achieves the best performance, with an accuracy of 93.1%, which outperforms Cross-Entropy by 1.9%, and Lexical Feature by 8.5%. Even though improvements by POS LMs and FW LMs are smaller than Japanese case, their effects are still positive. We also find that GPs stably contribute to the accuracy. These results show the applicability of our method to other languages. 5.3 Accuracy on Raw Web Pages To avoid unmodeled factors affecting the evaluation, we have carefully removed noise from our experiment datasets. However, real Web pages are 1604 more complex; there are often instances of sentence fragments, such as captions and navigational link text. To evaluate the accuracy of our method on real Web pages, we conduct experiments using the dataset generated by Rarrick et al. (2011) that contains randomly crawled Web pages annotated by two annotators to judge if a page is humangenerated or machine-translated. We use Japanese sentences extracted from 69 pages (43 humangenerated and 26 machine-translated pages) where the annotators’ judgments agree; 3, 312 sentences consisting of 1, 399 machine-translated and 1, 913 human-generated sentences. To replicate the situation in real Web pages, we conduct a minimal preprocessing, i.e., simply removing HTML tags, and then feed all the remaining text to our method. An SVM classifier is trained with features obtained by the LMs and gappy-phrases computed from the data described in Sec. 4.1. Our method shows 80.6% accuracy at a sentence level and 82.4% accuracy at a document level using the voting method. One factor for this performance difference is again sentence lengths, as SMT results of short phrases in Web pages can be of highquality. Another factor is the noise in Web pages. We find that experimental pages contain lots of non-sentences, such as fragments of scripts and product codes. The results show that we need a preprocessing to remove typical noise in Web text before SMT detection to handle noisy Web pages. 5.4 Quality of Cleaned Data Finally, we briefly demonstrate the effect of machine-translation filtering in an end-to-end scenario, taking LM construction as an example. 
We construct LMs reusing the Japanese evaluation dataset described in Sec. 4.1 where machinetranslated sentences are removed by the proposed method (LM-Proposed), Lexical Feature (LM-LF), and Cross-Entropy (LM-CE), as well as an LM with all sentences, i.e., with machinetranslated sentences (LM-All). As a result of 5fold cross-validation, LM-Proposed has 17.8%, 17.1%, and 16.3% lower perplexities on average compared to LM-All, LM-LF, and LM-CE, respectively. These results show that our method is useful for improving the quality of Web-mined data. 6 Conclusion We propose a method for detecting machinetranslated sentences from monolingual Web-text focusing on the phrase salad phenomenon produced by existing SMT systems. The experimental results show that our method achieves an accuracy of 95.8% for sentences and 80.6% for noisy Web text. We plan to extend our method to detect machine-translated sentences produced by different MT systems, e.g., a rule-based system, and develop a unified framework for cleaning various types of noise in Web-mined data. In addition, we will investigate the effect of source and target languages on translation in terms of MT detection. As Lopez (2008) describes, a phrase-salad is a common phenomenon that characterizes current SMT results. Therefore, we expect that our method is basically effective on different language pairs. We will conduct experiments to evaluate performance difference using various language pairs. Acknowledgments We sincerely appreciate Spencer Rarrick and Will Lewis for active discussion and sharing the experimental data with us. We thank Junichi Tsujii for his valuable feedback to improve our work. References Alexandra Antonova and Alexey Misyurev. 2011. Building a web-based parallel corpus and filtering out machine translated text. In Proceedings of the Workshop on Building and Using Comparable Corpora, pages 136–144. Eleftherios Avramidis, Maja Popovic, David Vilar Torres, and Aljoscha Burchardt. 2011. Evaluate with confidence estimation: Machine ranking of translation outputs using grammatical features. In Proceedings of the Workshop on Statistical Machine Translation (WMT 2011), pages 65–70. Mohit Bansal, Chris Quirk, and Robert C. Moore. 2011. Gappy phrasal alignment by agreement. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 1308– 1317. Marco Baroni and Silvia Bernardini. 2005. A new approach to the study of translationese: Machinelearning the difference between original and translated text. Literary and Linguistic Computing, 21(3):259–274. 1605 Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM : a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3):27:1–27:27. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2005), pages 263–270. Simon Corston-Oliver, Michael Gamon, and Chris Brockett. 2001. A machine learning approach to the automatic evaluation of machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2001), pages 148–155. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1535–1545. Michael Gamon, Anthony Aue, and Martine Smets. 2005. 
Sentence-level MT evaluation without reference translations: Beyond language modeling. In Proceedings of European Association for Machine Translation (EAMT 2005). Google N-gram Corpus. 2006. http://www.ldc. upenn.edu/Catalog/CatalogEntry. jsp?catalogId=LDC2006T13. Google Translate. 2006. http://code.google. com/apis/language/. Iustina Ilisei, Diana Inkpen, Gloria Corpas Pastor, and Ruslan Mitkov. 2010. Identification of translationese: A machine learning approach. In Proceedings of the International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2010), pages 503–511. Tatsuya Ishisaka, Masao Utiyama, Eiichiro Sumita, and Kazuhide Yamamoto. 2009. Development of a Japanese-English software manual parallel corpus. In Proceedings of the Machine Translation Summit (MT Summit XII). Long Jiang, Shiquan Yang, Ming Zhou, Xiaohua Liu, and Qingsheng Zhu. 2009. Mining bilingual data from the web with adaptively learnt patterns. In Proceedings of the Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2009), pages 870–878. Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), pages 230– 237. David Kurokawa, Cyril Goutte, and Pierre Isabelle. 2009. Automatic detection of translated text and its impact on machine translation. In Proceedings of the Machine Translation Summit (MT-Summit XII). Claudia Leacock, Martin Chodorow, Michael Gamon, and Joel Tetreault. 2010. Automated Grammatical Error Detection for Language Learners. Morgan and Claypool Publishers. Adam Lopez. 2008. Statistical machine translation. ACM Computing Surveys, 40(3):1–49. Xiaoyi Ma. 2006. Champollion: a robust parallel text sentence aligner. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2006), pages 489–492. Microsoft Translator. 2009. http://www. microsofttranslator.com/dev/. Microsoft Web N-gram Services. 2010. http:// research.microsoft.com/web-ngram. Robert Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 220–224. Ndapandula Nakashole, Gerhard Weikum, and Fabian M. Suchanek. 2012. PATTY: A taxonomy of relational patterns with semantic types. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), pages 1135–1145. Jian-Yun Nie, Michel Simard, Pierre Isabelle, and Richard Durand. 1999. Cross-language information retrieval based on parallel texts and automatic mining of parallel texts from the web. In Proceedings of the Annual International ACM SIGIR Conference (SIGIR 1999), pages 74–81. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 311–318. Kristen Parton, Joel Tetreault, Nitin Madnani, and Martin Chodorow. 2011. E-rating machine translation. In Proceedings of the Workshop on Statistical Machine Translation (WMT 2011), pages 108–115. 
Jian Pei, Jiawei Han, Behzad Mortazavi-Asl, Helen Pinto, Qiming Chen, Umeshwar Dayal, and MeiChun Hsu. 2001. PrefixSpan: Mining sequential patterns efficiently by prefix-projected pattern growth. In Proceedings of the International Conference on Data Engineering (ICDE 2001), pages 215–224. 1606 Spencer Rarrick, Chris Quirk, and Will Lewis. 2011. MT detection in web-scraped parallel corpora. In Proceedings of the Machine Translation Summit (MT Summit XIII). Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1–47. Lei Shi, Cheng Niu, Ming Zhou, and Jianfeng Gao. 2006. A DOM tree alignment model for mining parallel data from the web. In Proceedings of the International Conference on Computational Linguistics and the Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006), pages 489–496. Andreas Stolcke. 2002. SRILM-an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing (ICSLP 2002), pages 901–904. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of International Conference on World Wide Web (WWW 2007), pages 697–706. Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (HLTNAACL 2003), pages 252–259. Vladimir N. Vapnik. 1995. The nature of statistical learning theory. Springer. Jun Zhu, Zaiqing Nie, Xiaojiang Liu, Bo Zhang, and Ji-Rong Wen. 2009. StatSnowball: a statistical approach to extracting entity relationships. In Proceedings of International Conference on World Wide Web (WWW 2009), pages 101–110. 1607
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1608–1618, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Paraphrase-Driven Learning for Open Question Answering Anthony Fader Luke Zettlemoyer Oren Etzioni Computer Science & Engineering University of Washington Seattle, WA 98195 {afader, lsz, etzioni}@cs.washington.edu Abstract We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision. 1 Introduction Open-domain question answering (QA) is a longstanding, unsolved problem. The central challenge is to automate every step of QA system construction, including gathering large databases and answering questions against these databases. While there has been significant work on large-scale information extraction (IE) from unstructured text (Banko et al., 2007; Hoffmann et al., 2010; Riedel et al., 2010), the problem of answering questions with the noisy knowledge bases that IE systems produce has received less attention. In this paper, we present an approach for learning to map questions to formal queries over a large, open-domain database of extracted facts (Fader et al., 2011). Our system learns from a large, noisy, questionparaphrase corpus, where question clusters have a common but unknown query, and can span a diverse set of topics. Table 1 shows example paraphrase clusters for a set of factual questions. Such data provides strong signal for learning about lexical variation, but there are a number Who wrote the Winnie the Pooh books? Who is the author of winnie the pooh? What was the name of the authur of winnie the pooh? Who wrote the series of books for Winnie the poo? Who wrote the children’s storybook ‘Winnie the Pooh’? Who is poohs creator? What relieves a hangover? What is the best cure for a hangover? The best way to recover from a hangover? Best remedy for a hangover? What takes away a hangover? How do you lose a hangover? What helps hangover symptoms? What are social networking sites used for? Why do people use social networking sites worldwide? Advantages of using social network sites? Why do people use social networks a lot? Why do people communicate on social networking sites? What are the pros and cons of social networking sites? How do you say Santa Claus in Sweden? Say santa clause in sweden? How do you say santa clause in swedish? How do they say santa in Sweden? In Sweden what is santa called? Who is sweden santa? Table 1: Examples of paraphrase clusters from the WikiAnswers corpus. Within each cluster, there is a wide range of syntactic and lexical variations. of challenges. Given that the data is communityauthored, it will inevitably be incomplete, contain incorrectly tagged paraphrases, non-factual questions, and other sources of noise. Our core contribution is a new learning approach that scalably sifts through this paraphrase noise, learning to answer a broad class of factual questions. 
We focus on answering open-domain questions that can be answered with single-relation queries, e.g. all of the paraphrases of “Who wrote Winnie the Pooh?” and “What cures a hangover?” in Table 1. The algorithm answers such questions by mapping them to executable queries over a tuple store containing relations such as authored(milne, winnie-the-pooh) and treat(bloody-mary, hangover-symptoms). 1608 The approach automatically induces lexical structures, which are combined to build queries for unseen questions. It learns lexical equivalences for relations (e.g., wrote, authored, and creator), entities (e.g., Winnie the Pooh or Pooh Bear), and question templates (e.g., Who r the e books? and Who is the r of e?). Crucially, the approach does not require any explicit labeling of the questions in our paraphrase corpus. Instead, we use 16 seed question templates and string-matching to find high-quality queries for a small subset of the questions. The algorithm uses learned word alignments to aggressively generalize the seeds, producing a large set of possible lexical equivalences. We then learn a linear ranking model to filter the learned lexical equivalences, keeping only those that are likely to answer questions well in practice. Experimental results on 18 million paraphrase pairs gathered from WikiAnswers1 demonstrate the effectiveness of the overall approach. We performed an end-to-end evaluation against a database of 15 million facts automatically extracted from general web text (Fader et al., 2011). On known-answerable questions, the approach achieved 42% recall, with 77% precision, more than quadrupling the recall over a baseline system. In sum, we make the following contributions: • We introduce PARALEX, an end-to-end opendomain question answering system. • We describe scalable learning algorithms that induce general question templates and lexical variants of entities and relations. These algorithms require no manual annotation and can be applied to large, noisy databases of relational triples. • We evaluate PARALEX on the end-task of answering questions from WikiAnswers using a database of web extractions, and show that it outperforms baseline systems. • We release our learned lexicon and question-paraphrase dataset to the research community, available at http://openie.cs.washington.edu. 2 Related Work Our work builds upon two major threads of research in natural language processing: information extraction (IE), and natural language interfaces to databases (NLIDB). 1http://wiki.answers.com/ Research in IE has been moving towards the goal of extracting facts from large text corpora, across many domains, with minimal supervision (Mintz et al., 2009; Hoffmann et al., 2010; Riedel et al., 2010; Hoffmann et al., 2011; Banko et al., 2007; Yao et al., 2012). While much progress has been made in converting text into structured knowledge, there has been little work on answering natural language questions over these databases. There has been some work on QA over web text (Kwok et al., 2001; Brill et al., 2002), but these systems do not operate over extracted relational data. The NLIDB problem has been studied for decades (Grosz et al., 1987; Katz, 1997). More recently, researchers have created systems that use machine learning techniques to automatically construct question answering systems from data (Zelle and Mooney, 1996; Popescu et al., 2004; Zettlemoyer and Collins, 2005; Clarke et al., 2010; Liang et al., 2011). 
These systems have the ability to handle questions with complex semantics on small domain-specific databases like GeoQuery (Tang and Mooney, 2001) or subsets of Freebase (Cai and Yates, 2013), but have yet to scale to the task of general, open-domain question answering. In contrast, our system answers questions with more limited semantics, but does so at a very large scale in an open-domain manner. Some work has been made towards more general databases like DBpedia (Yahya et al., 2012; Unger et al., 2012), but these systems rely on hand-written templates for question interpretation. The learning algorithms presented in this paper are similar to algorithms used for paraphrase extraction from sentence-aligned corpora (Barzilay and McKeown, 2001; Barzilay and Lee, 2003; Quirk et al., 2004; Bannard and Callison-Burch, 2005; Callison-Burch, 2008; Marton et al., 2009). However, we use a paraphrase corpus for extracting lexical items relating natural language patterns to database concepts, as opposed to relationships between pairs of natural language utterances. 3 Overview of the Approach In this section, we give a high-level overview of the rest of the paper. Problem Our goal is to learn a function that will map a natural language question x to a query z over a database D. The database D is a collection of assertions in the form r(e1, e2) where r is a bi1609 nary relation from a vocabulary R, and e1 and e2 are entities from a vocabulary E. We assume that the elements of R and E are human-interpretable strings like population or new-york. In our experiments, R and E contain millions of entries representing ambiguous and overlapping concepts. The database is equipped with a simple interface that accepts queries in the form r(?, e2) or r(e1, ?). When executed, these queries return all entities e that satisfy the given relationship. Thus, our task is to find the query z that best captures the semantics of the question x. Model The question answering model includes a lexicon and a linear ranking function. The lexicon L associates natural language patterns to database concepts, thereby defining the space of queries that can be derived from the input question (see Table 2). Lexical entries can pair strings with database entities (nyc and new-york), strings with database relations (big and population), or question patterns with templated database queries (how r is e? and r(?,e)). We describe this model in more detail in Section 4. Learning The learning algorithm induces a lexicon L and estimates the parameters θ of the linear ranking function. We learn L by bootstrapping from an initial seed lexicon L0 over a corpus of question paraphrases C = {(x, x′) : x′ is a paraphrase of x}, like the examples in Table 1. We estimate θ by using the initial lexicon to automatically label queries in the paraphrase corpus, as described in Section 5.2. The final result is a scalable learning algorithm that requires no manual annotation of questions. Evaluation In Section 8, we evaluate our system against various baselines on the end-task of question answering against a large database of facts extracted from the web. We use held-out knownanswerable questions from WikiAnswers as a test set. 4 Question Answering Model To answer questions, we must find the best query for a given natural language question. 4.1 Lexicon and Derivations To define the space of possible queries, PARALEX uses a lexicon L that encodes mappings from natural language to database concepts (entities, relations, and queries). 
Each entry in L is a pair (p, d) Entry Type NL Pattern DB Concept Entity nyc new-york Relation big population Question (1-Arg.) how big is e population(?, e) Question (2-Arg.) how r is e r(?, e) Table 2: Example lexical entries. where p is a pattern and d is an associated database concept. Table 2 gives examples of the entry types in L: entity, relation, and question patterns. Entity patterns match a contiguous string of words and are associated with some database entity e ∈E. Relation patterns match a contiguous string of words and are associated with a relation r ∈R and an argument ordering (e.g. the string child could be modeled as either parent-of or child-of with opposite argument ordering). Question patterns match an entire question string, with gaps that recursively match an entity or relation patterns. Question patterns are associated with a templated database query, where the values of the variables are determined by the matched entity and relation patterns. A question pattern may be 1-Argument, with a variable for an entity pattern, or 2-Argument, with variables for an entity pattern and a relation pattern. A 2argument question pattern may also invert the argument order of the matched relation pattern, e.g. who r e? may have the opposite argument order of who did e r? The lexicon is used to generate a derivation y from an input question x to a database query z. For example, the entries in Table 2 can be used to make the following derivation from the question How big is nyc? to the query population(?, new-york): This derivation proceeds in two steps: first matching a question form like How r is e? and then mapping big to population and nyc to new-york. Factoring the derivation this way allows the lexical entries for big and nyc to be reused in semanti1610 cally equivalent variants like nyc how big is it? or approximately how big is nyc? This factorization helps the system generalize to novel questions that do not appear in the training set. We model a derivation as a set of (pi, di) pairs, where each pi matches a substring of x, the substrings cover all words in x, and the database concepts di compose to form z. Derivations are rooted at either a 1-argument or 2-argument question entry and have entity or relation entries as leaves. 4.2 Linear Ranking Function In general, multiple queries may be derived from a single input question x using a lexicon L. Many of these derivations may be incorrect due to noise in L. Given a question x, we consider all derivations y and score them with θ · φ(x, y), where φ(x, y) is a n-dimensional feature representation and θ is a n-dimensional parameter vector. Let GEN(x; L) be the set of all derivations y that can be generated from x using L. The best derivation y∗(x) according to the model (θ, L) is given by: y∗(x) = arg max y∈GEN(x;L) θ · φ(x, y) The best query z∗(x) can be computed directly from the derivation y∗(x). Computing the set GEN(x; L) involves finding all 1-Argument and 2-Argument question patterns that match x, and then enumerating all possible database concepts that match entity and relation strings. When the database and lexicon are large, this becomes intractable. We prune GEN(x; L) using the model parameters θ by only considering the N-best question patterns that match x, before additionally enumerating any relations or entities. For the end-to-end QA task, we return a ranked list of answers from the k highest scoring queries. We score an answer a with the highest score of all derivations that generate a query with answer a. 
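A minimal sketch of the ranking step just described: derivations are treated as sparse feature vectors, scored with θ · φ(x, y), and answers are ranked by the highest-scoring derivation that produces them. The feature names, the two-fact database, and the candidate derivations are invented for illustration; GEN(x; L), i.e., pattern matching against the lexicon, is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Derivation:
    query: str        # e.g. "population(?, new-york)"
    features: dict    # sparse phi(x, y), e.g. {"q_pattern=how r is e": 1.0, ...}

def score(theta, derivation):
    # theta . phi(x, y) over the sparse feature representation
    return sum(theta.get(f, 0.0) * v for f, v in derivation.features.items())

def best_derivation(theta, candidates):
    """arg max_{y in GEN(x;L)} theta . phi(x, y); candidates stand in for GEN(x;L)."""
    return max(candidates, key=lambda y: score(theta, y))

def ranked_answers(theta, candidates, execute):
    """Rank answers by the highest-scoring derivation that generates them."""
    best = {}
    for y in candidates:
        s = score(theta, y)
        for answer in execute(y.query):
            best[answer] = max(best.get(answer, float("-inf")), s)
    return sorted(best, key=best.get, reverse=True)

# Toy usage with a hypothetical two-fact database.
db = {"population(?, new-york)": ["8.4 million"], "mayor(?, new-york)": ["bloomberg"]}
cands = [
    Derivation("population(?, new-york)", {"rel=big->population": 1.0}),
    Derivation("mayor(?, new-york)", {"rel=big->mayor": 1.0}),
]
theta = {"rel=big->population": 2.0, "rel=big->mayor": -1.0}
print(best_derivation(theta, cands).query)
print(ranked_answers(theta, cands, lambda q: db.get(q, [])))
```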
5 Learning PARALEX uses a two-part learning algorithm; it first induces an overly general lexicon (Section 5.1) and then learns to score derivations to increase accuracy (Section 5.2). Both algorithms rely on an initial seed lexicon, which we describe in Section 7.4. 5.1 Lexical Learning The lexical learning algorithm constructs a lexicon L from a corpus of question paraphrases C = {(x, x′) : x′ is a paraphrase of x}, where we assume that all paraphrased questions (x, x′) can be answered with a single, initially unknown, query (Table 1 shows example paraphrases). This assumption allows the algorithm to generalize from the initial seed lexicon L0, greatly increasing the lexical coverage. As an example, consider the paraphrase pair x = What is the population of New York? and x′ = How big is NYC? Suppose x can be mapped to a query under L0 using the following derivation y: what is the r of e = r(?, e) population = population new york = new-york We can induce new lexical items by aligning the patterns used in y to substrings in x′. For example, suppose we know that the words in (x, x′) align in the following way: Using this information, we can hypothesize that how r is e, big, and nyc should have the same interpretations as what is the r of e, population, and new york, respectively, and create the new entries: how r is e = r(?, e) big = population nyc = new-york We call this procedure InduceLex(x, x′, y, A), which takes a paraphrase pair (x, x′), a derivation y of x, and a word alignment A, and returns a new set of lexical entries. Before formally describing InduceLex we need to introduce some definitions. Let n and n′ be the number of words in x and x′. Let [k] denote the set of integers {1, . . . , k}. A word alignment A between x and x′ is a subset of [n] × [n′]. A phrase alignment is a pair of index sets (I, I′) where I ⊆[n] and I′ ⊆[n′]. A phrase alignment (I, I′) is consistent with a word alignment A if for all (i, i′) ∈A, i ∈I if and only if i′ ∈I′. In other words, a phrase alignment is consistent with a word alignment if the words in the phrases are aligned only with each other, and not with any outside words. We will now define InduceLex(x, x′, y, A) for the case where the derivation y consists of a 2argument question entry (pq, dq), a relation entry 1611 function LEARNLEXICON Inputs: - A corpus C of paraphrases (x, x′). (Table 1) - An initial lexicon L0 of (pattern, concept) pairs. - A word alignment function WordAlign(x, x′). (Section 6) - Initial parameters θ0. - A function GEN(x; L) that derives queries from a question x using lexicon L. (Section 4) - A function InduceLex(x, x′, y, A) that induces new lexical items from the paraphrases (x, x′) using their word alignment A and a derivation y of x. (Section 5.1) Output: A learned lexicon L. L = {} for all x, x′ ∈C do if GEN(x; L0) is not empty then A ←WordAlign(x, x′) y∗←arg maxy∈GEN(x;L0) θ0 · φ(x, y) L ←L ∪InduceLex(x, x′, y∗, A) return L Figure 1: Our lexicon learning algorithm. (pr, dr), and an entity entry (pe, de), as shown in the example above.2 InduceLex returns the set of all triples (p′ q, dq), (p′ r, dr), (p′ e, de) such that for all p′ q, p′ r, p′ e such that 1. p′ q, p′ r, p′ e are a partition of the words in x′. 2. The phrase pairs (pq, p′ q), (pr, p′ r), (pe, p′ e) are consistent with the word alignment A. 3. The p′ r and p′ e are contiguous spans of words in x′. Figure 1 shows the complete lexical learning algorithm. 
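The consistency condition on phrase alignments is the core test inside InduceLex. The sketch below implements that predicate and the contiguity requirement directly from the definitions above; the index sets and the toy word alignment for the population/big example are assumptions made for illustration, and the enumeration of candidate partitions of x′ is omitted.

```python
def consistent(I, I_prime, A):
    """A phrase alignment (I, I') is consistent with word alignment A if,
    for every link (i, i') in A, i is in I exactly when i' is in I'."""
    I, I_prime = set(I), set(I_prime)
    return all((i in I) == (j in I_prime) for i, j in A)

def contiguous(indices):
    """InduceLex additionally requires relation/entity spans to be contiguous in x'."""
    idx = sorted(indices)
    return idx == list(range(idx[0], idx[-1] + 1)) if idx else False

# Toy example:
#   x  = what is the population of new york   (word indices 0..6)
#   x' = how big is nyc                        (word indices 0..3)
# Hypothetical word alignment linking population<->big and new/york<->nyc.
A = {(3, 1), (5, 3), (6, 3)}
print(consistent({3}, {1}, A))       # relation phrases: population <-> big -> True
print(consistent({5, 6}, {3}, A))    # entity phrases: new york <-> nyc    -> True
print(consistent({3, 5}, {1}, A))    # pulls 'new' in without 'nyc'        -> False
print(contiguous({5, 6}), contiguous({1, 3}))  # True False
```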
In practice, for a given paraphrase pair (x, x′) and alignment A, InduceLex will generate multiple sets of new lexical entries, resulting in a lexicon with millions of entries. We use an existing statistical word alignment algorithm for WordAlign (see Section 6). In the next section, we will introduce a scalable approach for learning to score derivations to filter out lexical items that generalize poorly. 5.2 Parameter Learning Parameter learning is necessary for filtering out derivations that use incorrect lexical entries like new mexico = mexico, which arise from noise in the paraphrases and noise in the word alignment. 2InduceLex has similar behavior for the other type of derivation, which consists of a 1-argument question entry (pq, dq) and an entity (pe, de). We use the hidden variable structured perceptron algorithm to learn θ from a list of (question x, query z) training examples. We adopt the iterative parameter mixing variation of the perceptron (McDonald et al., 2010) to scale to a large number of training examples. Figure 2 shows the parameter learning algorithm. The parameter learning algorithm operates in two stages. First, we use the initial lexicon L0 to automatically generate (question x, query z) training examples from the paraphrase corpus C. Then we feed the training examples into the learning algorithm, which estimates parameters for the learned lexicon L. Because the number of training examples is large, we adopt a parallel perceptron approach. We first randomly partition the training data T into K equally-sized subsets T1, . . . , TK. We then perform perceptron learning on each partition in parallel. Finally, the learned weights from each parallel run are aggregated by taking a uniformly weighted average of each partition’s parameter vector. This procedure is repeated for T iterations. The training data consists of (question x, query z) pairs, but our scoring model is over (question x, derivation y) pairs, which are unobserved in the training data. We use a hidden variable version of the perceptron algorithm (Collins, 2002), where the model parameters are updated using the highest scoring derivation y∗that will generate the correct query z using the learned lexicon L. 6 Data For our database D, we use the publicly available set of 15 million REVERB extractions (Fader et al., 2011).3 The database consists of a set of triples r(e1, e2) over a vocabulary of approximately 600K relations and 2M entities, extracted from the ClueWeb09 corpus.4 The REVERB database contains a large cross-section of general world-knowledge, and thus is a good testbed for developing an open-domain QA system. However, the extractions are noisy, unnormalized (e.g., the strings obama, barack-obama, and president-obama all appear as distinct entities), and ambiguous (e.g., the relation born-in contains facts about both dates and locations). 3We used version 1.1, downloaded from http:// reverb.cs.washington.edu/. 4The full set of REVERB extractions from ClueWeb09 contains over six billion triples. We used the smaller subset of triples to simplify our experiments. 1612 function LEARNPARAMETERS Inputs: - A corpus C of paraphrases (x, x′). (Table 1) - An initial lexicon L0 of (pattern, db concept) pairs. - A learned lexicon L of (pattern, db concept) pairs. - Initial parameters θ0. - Number of perceptron epochs T. - Number of training-data shards K. - A function GEN(x; L) that derives queries from a question x using lexicon L. 
(Section 4) - A function PerceptronEpoch(T , θ, L) that runs a single epoch of the hidden-variable structured perceptron algorithm on training set T with initial parameters θ, returning a new parameter vector θ′. (Section 5.2) Output: A learned parameter vector θ. // Step 1: Generate Training Examples T T = {} for all x, x′ ∈C do if GEN(x; L0) is not empty then y∗←arg maxy∈GEN(x;L0) θ0 · φ(x, y) z∗←query of y∗ Add (x′, z∗) to T // Step 2: Learn Parameters from T Randomly partition T into shards T1, . . . , TK for t = 1 . . . T do // Executed on k processors θk,t = PerceptronEpoch(Tk, θt−1, L) // Average the weights θt = 1 K P k θk,t return θT Figure 2: Our parameter learning algorithm. Our paraphrase corpus C was constructed from the collaboratively edited QA site WikiAnswers. WikiAnswers users can tag pairs of questions as alternate wordings of each other. We harvested a set of 18M of these question-paraphrase pairs, with 2.4M distinct questions in the corpus. To estimate the precision of the paraphrase corpus, we randomly sampled a set of 100 pairs and manually tagged them as ‘paraphrase’ or ‘notparaphrase.’ We found that 55% of the sampled pairs are valid paraphrased. Most of the incorrect paraphrases were questions that were related, but not paraphrased e.g. How big is the biggest mall? and Most expensive mall in the world? We word-aligned each paraphrase pair using the MGIZA++ implementation of IBM Model 4 (Och and Ney, 2000; Gao and Vogel, 2008). The word-alignment algorithm was run in each direction (x, x′) and (x′, x) and then combined using the grow-diag-final-and heuristic (Koehn et al., 2003). 7 Experimental Setup We compare the following systems: • PARALEX: the full system, using the lexical learning and parameter learning algorithms from Section 5. • NoParam: PARALEX without the learned parameters. • InitOnly: PARALEX using only the initial seed lexicon. We evaluate the systems’ performance on the endtask of QA on WikiAnswers questions. 7.1 Test Set A major challenge for evaluation is that the REVERB database is incomplete. A system may correctly map a test question to a valid query, only to return 0 results when executed against the incomplete database. We factor out this source of error by semi-automatically constructing a sample of questions that are known to be answerable using the REVERB database, and thus allows for a meaningful comparison on the task of question understanding. To create the evaluation set, we identified questions x in a held out portion of the WikiAnswers corpus such that (1) x can be mapped to some query z using an initial lexicon (described in Section 7.4), and (2) when z is executed against the database, it returns at least one answer. We then add x and all of its paraphrases as our evaluation set. For example, the question What is the language of Hong-Kong satisfies these requirements, so we added these questions to the evaluation set: What is the language of Hong-Kong? What language do people in hong kong use? How many languages are spoken in hong kong? How many languages hong kong people use? In Hong Kong what language is spoken? Language of Hong-kong? This methodology allows us to evaluate the systems’ ability to handle syntactic and lexical variations of questions that should have the same answers. We created 37 question clusters, resulting in a total of 698 questions. We removed all of these questions and their paraphrases from the training set. We also manually filtered out any incorrect paraphrases that appeared in the test clusters. 
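Returning briefly to the parameter learning of Section 5.2, Step 2 of Figure 2 can be sketched as follows. The code is a simplified, sequential stand-in for the parallel shard updates, and perceptron_epoch is assumed to implement the hidden-variable structured perceptron epoch, so none of this is the released implementation.

    # Sketch of perceptron learning with iterative parameter mixing
    # (McDonald et al., 2010), written sequentially for clarity.
    import random
    from collections import defaultdict

    def average_weights(weight_vectors):
        """Uniformly weighted average of the per-shard parameter vectors."""
        avg = defaultdict(float)
        for theta in weight_vectors:
            for f, v in theta.items():
                avg[f] += v / len(weight_vectors)
        return avg

    def learn_parameters(examples, lexicon, perceptron_epoch, T=20, K=10, seed=0):
        examples = list(examples)
        random.seed(seed)
        random.shuffle(examples)
        shards = [examples[k::K] for k in range(K)]   # K roughly equal shards
        theta = defaultdict(float)                    # theta_0 = 0
        for _ in range(T):
            # In the paper these K epochs run in parallel, one per processor.
            per_shard = [perceptron_epoch(shard, theta, lexicon) for shard in shards]
            theta = average_weights(per_shard)        # mix weights after each epoch
        return theta

The defaults T=20 and K=10 match the settings reported in Section 7.3.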
We then created a gold-standard set of (x, a, l) triples, where x is a question, a is an answer, and l 1613 Question Pattern Database Query who r e r(?, e) what r e r(?, e) who does e r r(e, ?) what does e r r(e, ?) what is the r of e r(?, e) who is the r of e r(?, e) what is r by e r(e, ?) who is e’s r r(?, e) what is e’s r r(?, e) who is r by e r(e, ?) when did e r r-in(e, ?) when did e r r-on(e, ?) when was e r r-in(e, ?) when was e r r-on(e, ?) where was e r r-in(e, ?) where did e r r-in(e, ?) Table 3: The question patterns used in the initial lexicon L0. is a label (correct or incorrect). To create the goldstandard, we first ran each system on the evaluation questions to generate (x, a) pairs. Then we manually tagged each pair with a label l. This resulted in a set of approximately 2, 000 human judgments. If (x, a) was tagged with label l and x′ is a paraphrase of x, we automatically added the labeling (x′, a, l), since questions in the same cluster should have the same answer sets. This process resulted in a gold standard set of approximately 48, 000 (x, a, l) triples. 7.2 Metrics We use two types of metrics to score the systems. The first metric measures the precision and recall of each system’s highest ranked answer. Precision is the fraction of predicted answers that are correct and recall is the fraction of questions where a correct answer was predicted. The second metric measures the accuracy of the entire ranked answer set returned for a question. We compute the mean average precision (MAP) of each systems’ output, which measures the average precision over all levels of recall. 7.3 Features and Settings The feature representation φ(x, y) consists of indicator functions for each lexical entry (p, d) ∈L used in the derivation y. For parameter learning, we use an initial weight vector θ0 = 0, use T = 20 F1 Precision Recall MAP PARALEX 0.54 0.77 0.42 0.22 NoParam 0.30 0.53 0.20 0.08 InitOnly 0.18 0.84 0.10 0.04 Table 4: Performance on WikiAnswers questions known to be answerable using REVERB. F1 Precision Recall MAP PARALEX 0.54 0.77 0.42 0.22 No 2-Arg. 0.40 0.86 0.26 0.12 No 1-Arg 0.35 0.81 0.22 0.11 No Relations 0.18 0.84 0.10 0.03 No Entity 0.36 0.55 0.27 0.15 Table 5: Ablation of the learned lexical items. 0.0 0.1 0.2 0.3 0.4 0.5 Recall 0.5 0.6 0.7 0.8 0.9 1.0 Precision PARALEX No 2-Arg. Initial Lexicon Figure 3: Precision-recall curves for PARALEX with and without 2-argument question patterns. iterations and shard the training data into K = 10 pieces. We limit each system to return the top 100 database queries for each test sentence. All input words are lowercased and lemmatized. 7.4 Initial Lexicon Both the lexical learning and parameter learning algorithms rely on an initial seed lexicon L0. The initial lexicon allows the learning algorithms to bootstrap from the paraphrase corpus. We construct L0 from a set of 16 hand-written 2-argument question patterns and the output of the identity transformation on the entity and relation strings in the database. Table 3 shows the question patterns that were used in L0. 8 Results Table 4 shows the performance of PARALEX on the test questions. PARALEX outperforms the baseline systems in terms of both F1 and MAP. The lexicon-learning algorithm boosts the recall by a factor of 4 over the initial lexicon, showing the utility of the InduceLex algorithm. 
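For reference, the metrics of Section 7.2 can be computed along the following lines. The average-precision variant below normalizes by the number of correct answers retrieved, which is one common formulation; the paper does not spell out its exact normalization, so treat this as illustrative only.

    # Sketch of the evaluation metrics, assuming gold labels per (question, answer).
    # `top` maps each question to the highest-ranked answer (or None);
    # `ranked` maps each question to its full ranked answer list.

    def top_answer_precision_recall(top, gold):
        predicted = {q: a for q, a in top.items() if a is not None}
        correct = sum(1 for q, a in predicted.items() if gold.get((q, a)) == "correct")
        precision = correct / len(predicted) if predicted else 0.0
        recall = correct / len(top) if top else 0.0
        return precision, recall

    def average_precision(ranked_answers, question, gold):
        """Mean of precision@k over the ranks k holding a correct answer."""
        hits, precisions = 0, []
        for k, a in enumerate(ranked_answers, start=1):
            if gold.get((question, a)) == "correct":
                hits += 1
                precisions.append(hits / k)
        return sum(precisions) / len(precisions) if precisions else 0.0

    def mean_average_precision(ranked, gold):
        if not ranked:
            return 0.0
        return sum(average_precision(ans, q, gold) for q, ans in ranked.items()) / len(ranked)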
The 1614 String Learned Database Relations for String get rid of treatment-for, cause, get-rid-of, cure-for, easiest-way-to-get-rid-of word word-for, slang-term-for, definition-of, meaning-of, synonym-of speak speak-language-in, language-speak-in, principal-language-of, dialect-of useful main-use-of, purpose-of, importance-of, property-of, usefulness-of String Learned Database Entities for String smoking smoking, tobacco-smoking, cigarette, smoking-cigar, smoke, quit-smoking radiation radiation, electromagnetic-radiation, nuclear-radiation vancouver vancouver, vancouver-city, vancouver-island, vancouver-british-columbia protein protein, protein-synthesis, plasma-protein, monomer, dna Table 6: Examples of relation and entity synonyms learned from the WikiAnswers paraphrase corpus. parameter-learning algorithm also results in a large gain in both precision and recall: InduceLex generates a noisy set of patterns, so selecting the best query for a question is more challenging. Table 5 shows an ablation of the different types of lexical items learned by PARALEX. For each row, we removed the learned lexical items from each of the types described in Section 4, keeping only the initial seed lexical items. The learned 2argument question templates significantly increase the recall of the system. This increased recall came at a cost, lowering precision from 0.86 to 0.77. Thresholding the query score allows us to trade precision for recall, as shown in Figure 3. Table 6 shows some examples of the learned entity and relation synonyms. The 2-argument question templates help PARALEX generalize over different variations of the same question, like the test questions shown in Table 7. For each question, PARALEX combines a 2-argument question template (shown below the questions) with the rules celebrate = holiday-of and christians = christians to derive a full query. Factoring the problem this way allows PARALEX to reuse the same rules in different syntactic configurations. Note that the imperfect training data can lead to overly-specific templates like what are the religious r of e, which can lower accuracy. 9 Error Analysis To understand how close we are to the goal of open-domain QA, we ran PARALEX on an unrestricted sample of questions from WikiAnswers. We used the same methodology as described in the previous section, where PARALEX returns the top answer for each question using REVERB. We found that PARALEX performs significantly worse on this dataset, with recall maxing out at apCelebrations for Christians? r for e? Celebrations of Christians? r of e? What are some celebrations for Christians? what are some r for e? What are some celebrations of the Christians? what are some r of e? What are some of Christians celebrations? what are some of e r? What celebrations do Christians do? what r do e do? What did Christians celebrate? what did e r? What are the religious celebrations of Christians? what are the religious r of e? What celebration do Christians celebrate? what r do e celebrate? Table 7: Questions from the test set with 2argument question patterns that PARALEX used to derive a correct query. proximately 6% of the questions answered at precision 0.4. This is not surprising, since the test questions are not restricted to topics covered by the REVERB database, and may be too complex to be answered by any database of relational triples. We performed an error analysis on a sample of 100 questions that were either incorrectly answered or unanswered. 
We examined the candidate queries that PARALEX generated for each question and tagged each query as correct (would return a valid answer given a correct and complete database) or incorrect. Because the input questions are unrestricted, we also judged whether the questions could be faithfully represented as a r(?, e) or r(e, ?) query over the database vocabulary. Table 8 shows the distribution of errors. The largest source of error (36%) were on com1615 plex questions that could not be represented as a query for various reasons. We categorized these questions into groups. The largest group (14%) were questions that need n-ary or higher-order database relations, for example How long does it take to drive from Sacramento to Cancun? or What do cats and dogs have in common? Approximately 13% of the questions were how-to questions like How do you make axes in minecraft? whose answers are a sequence of steps, instead of a database entity. Lastly, 9% of the questions require database operators like joins, for example When were Bobby Orr’s children born? The second largest source of error (32%) were questions that could be represented as a query, but where PARALEX was unable to derive any correct queries. For example, the question Things grown on Nigerian farms? was not mapped to any queries, even though the REVERB database contains the relation grown-in and the entity nigeria. We found that 13% of the incorrect questions were cases where the entity was not recognized, 12% were cases where the relation was not recognized, and 6% were cases where both the entity and relation were not recognized. We found that 28% of the errors were cases where PARALEX derived a query that we judged to be correct, but returned no answers when executed against the database. For example, given the question How much can a dietician earn? PARALEX derived the query salary-of(?, dietician) but this returned no answers in the REVERB database. Finally, approximately 4% of the questions included typos or were judged to be inscrutable, for example Barovier hiriacy of evidence based for pressure sore? Discussion Our experiments show that the learning algorithms described in Section 5 allow PARALEX to generalize beyond an initial lexicon and answer questions with significantly higher accuracy. Our error analysis on an unrestricted set of WikiAnswers questions shows that PARALEX is still far from the goal of truly high-recall, opendomain QA. We found that many questions asked on WikiAnswers are either too complex to be mapped to a simple relational query, or are not covered by the REVERB database. Further, approximately one third of the missing recall is due to entity and relation recognition errors. Incorrectly Answered/Unanswered Questions 36% Complex Questions Need n-ary or higher-order relations (14%) Answer is a set of instructions (13%) Need database operators e.g. joins (9%) 32% Entity or Relation Recognition Errors Entity recognition errors (13%) Relation recognition errors (12%) Entity & relation recognition errors (7%) 28% Incomplete Database Derived a correct query, but no answers 4% Typos/Inscrutable Questions Table 8: Error distribution of PARALEX on an unrestricted sample of questions from the WikiAnswers dataset. 10 Conclusion We introduced a new learning approach that induces a complete question-answering system from a large corpus of noisy question-paraphrases. 
Using only a seed lexicon, the approach automatically learns a lexicon and linear ranking function that demonstrated high accuracy on a held-out evaluation set. A number of open challenges remain. First, precision could likely be improved by adding new features to the ranking function. Second, we would like to generalize the question understanding framework to produce more complex queries, constructed within a compositional semantic framework, but without sacrificing scalability. Third, we would also like to extend the system with other large databases like Freebase or DBpedia. Lastly, we believe that it would be possible to leverage the user-provided answers from WikiAnswers as a source of supervision. Acknowledgments This research was supported in part by ONR grant N00014-11-1-0294, DARPA contract FA8750-09C-0179, a gift from Google, a gift from Vulcan Inc., and carried out at the University of Washington’s Turing Center. We would like to thank Yoav Artzi, Tom Kwiatkowski, Yuval Marton, Mausam, Dan Weld, and the anonymous reviewers for their helpful comments. 1616 References Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open Information Extraction from the Web. In Proceedings of the 20th international joint conference on Artifical intelligence. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with Bilingual Parallel Corpora. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Regina Barzilay and Lillian Lee. 2003. Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence Alignment. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics. Regina Barzilay and Kathleen R. McKeown. 2001. Extracting Paraphrases from a Parallel Corpus. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics. Eric Brill, Susan Dumais, and Michele Banko. 2002. An Analysis of the AskMSR Question-Answering System. In Proceedings of Empirical Methods in Natural Language Processing. Qingqing Cai and Alexander Yates. 2013. Large-scale Semantic Parsing via Schema Matching and Lexicon Extension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Chris Callison-Burch. 2008. Syntactic Constraints on Paraphrases Extracted from Parallel Corpora. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving Semantic Parsing from the World’s Response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying Relations for Open Information Extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Qin Gao and Stephan Vogel. 2008. Parallel Implementations of Word Alignment Tool. In Proc. of the ACL 2008 Software Engineering, Testing, and Quality Assurance Workshop. Barbara J. Grosz, Douglas E. Appelt, Paul A. Martin, and Fernando C. N. Pereira. 1987. TEAM: An Experiment in the Design of Transportable Natural-Language Interfaces. Artificial Intelligence, 32(2):173–243. Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. 
Learning 5000 relational extractors. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. Boris Katz. 1997. Annotating the World Wide Web using Natural Language. In RIAO, pages 136–159. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics. Cody Kwok, Oren Etzioni, and Daniel S. Weld. 2001. Scaling Question Answering to the Web. ACM Trans. Inf. Syst., 19(3):242–262. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning Dependency-Based Compositional Semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. Yuval Marton, Chris Callison-Burch, and Philip Resnik. 2009. Improved Statistical Machine Translation Using Monolingually-Derived Paraphrases. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Ryan McDonald, Keith Hall, and Gideon Mann. 2010. Distributed training strategies for the structured perceptron. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant Supervision for Relation Extraction Without Labeled Data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL. Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. Ana-Maria Popescu, Alex Armanasu, Oren Etzioni, David Ko, and Alexander Yates. 2004. Modern Natural Language Interfaces to Databases: Composing Statistical Parsing with Semantic Tractability. In Proceedings of the Twentieth International Conference on Computational Linguistics. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual Machine Translation for Paraphrase Generation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. 1617 Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling Relations and Their Mentions without Labeled Text. In Proceedings of the 2010 European conference on Machine learning and Knowledge Discovery in Databases. Lappoon R. Tang and Raymond J. Mooney. 2001. Using Multiple Clause Constructors in Inductive Logic Programming for Semantic Parsing. Christina Unger, Lorenz B¨uhmann, Jens Lehmann, Axel-Cyrille Ngonga Ngomo, Daniel Gerber, and Philipp Cimiano. 2012. Template-Based Question Answering over RDF Data. In Proceedings of the 21st World Wide Web Conference 2012. Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural Language Questions for the Web of Data. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2012. Unsupervised Relation Discovery with Sense Disambiguation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. John M. Zelle and Raymond J. Mooney. 1996. 
Learning to Parse Database Queries Using Inductive Logic Programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. In Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence. 1618
2013
158
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1619–1629, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Aid is Out There: Looking for Help from Tweets during a Large Scale Disaster Istv´an Varga† Motoki Sano† Kentaro Torisawa† Chikara Hashimoto† Kiyonori Ohtake† Takao Kawai§ Jong-Hoon Oh† Stijn De Saeger† †Information Analysis Laboratory, National Institute of Information and Communications Technology (NICT), Japan {istvan, msano, torisawa, ch, kiyonori.ohtake, rovellia, stijn}@nict.go.jp §Knowledge Discovery Research Laboratories, NEC Corporation, Japan [email protected] Abstract The 2011 Great East Japan Earthquake caused a wide range of problems, and as countermeasures, many aid activities were carried out. Many of these problems and aid activities were reported via Twitter. However, most problem reports and corresponding aid messages were not successfully exchanged between victims and local governments or humanitarian organizations, overwhelmed by the vast amount of information. As a result, victims could not receive necessary aid and humanitarian organizations wasted resources on redundant efforts. In this paper, we propose a method for discovering matches between problem reports and aid messages. Our system contributes to problem-solving in a large scale disaster situation by facilitating communication between victims and humanitarian organizations. 1 Introduction The 2011 Great East Japan Earthquake in March 11, 2011 killed 15,883 people and destroyed over 260,000 households (National Police Agency of Japan, 2013). Accustomed way of living suddenly became unmanageable and people found themselves in extreme conditions for months. Just after the disaster, many people used Twitter for posting problem reports and aid messages as it functioned while most communication channels suffered disruptions (Winn, 2011; Acar and Muraki, 2011; Sano et al., 2012). Examples of such problem reports and aid messages, translated from Japanese tweets, are given below (P1, A1). P1 My friend said infant formula is sold out. If somebody knows shops in Sendai-city where they still have it in stock, please let us know. A1 At Jusco supermarket in Sendai, you can still buy water and infant formula. If A1 would have been forwarded to the sender of P1, it could have helped since it would help the “friend” to obtain infant formula. But in reality, the majority of such reports/messages, especially unforeseen ones went unnoticed amongst the mass of information (Ohtake et al., 2013). In addition, there were cases where many humanitarian organizations responded to the same problems and wasted precious resources. For instance, many volunteers responded to problems which were heavily reported by public media, leading to oversupply (Saijo, 2012). Such waste of resources could have been avoided if the organizations would have successfully shared the aid messages for the same problems. Such observations motivated this work. We developed methods for recognizing problem reports and aid messages in tweets and finding proper matches between them. By browsing the discovered matches, victims can be assisted to overcome their problems, and humanitarian organizations can avoid redundant relief efforts. We define problem reports, aid messages and their successful matches as follows. Problem report: A tweet that informs about the possibility or emergence of a problem that requires a treatment or countermeasure. 
Aid message: A tweet that (1) informs about situations or actions that can be a remedy or solution for a problem, or (2) informs that the problem is solved or is about to be solved. Problem-aid tweet match: A tweet pair is a problem-aid tweet match (1) if the aid message informs how to overcome the problem, (2) if the aid message informs about the set1619 tlement of the problem, or (3) if the aid message provides information which contributes to the settlement of the problem. In this work we excluded direct requests, such as “Send us food!”, from problem reports. This is because it is relatively easy to recognize such direct requests by checking mood types (i.e., imperative) and their behavior is quite different from problem reports like “People in Sendai are starving”. Problem reports in this work do not directly state which actions are required, only implying the necessity of a countermeasure through claiming the existence of problems. An underlying assumption of our method is that we can find a noun-predicate dependency relation that works as an indicator of problems and aids in problem reports and aid messages, which we refer to as problem nucleus and aid nucleus.1 An example of problem nucleus is “infant formula is sold out” in P1, and that of aid nucleus is “(can) buy infant formula” in A1. Many problem-aid tweet matches can be recognized through problem and aid nuclei pairs. We also assume that if the problem and aid nuclei match, they share the same noun. Then, the semantics of predicates in the nuclei is the main factor that decides whether the nuclei constitute a match. We introduce a semantic classification of predicates according to the framework of excitation polarities proposed in Hashimoto et al. (2012). Our hypothesis is that excitation polarities along with trouble expressions can characterize problem reports, aid messages and their matches. We developed a supervised method encoding such information into its features. An evident alternative to this approach is to use sentiment analysis (Mandel et al., 2012; Tsagkalidou et al., 2011) assuming that problem reports should include something ‘bad’ while aid messages describe something ‘good’. However, we will show that this does not work well in our experiments. We think this is due to mismatch between the concepts of problem/aid and sentiment polarity. Note that previous work on ‘demand’ recognition also found similar tendencies (Kanayama and Nasukawa, 2008). Another issue in this task is, of course, the context surrounding problem/aid nuclei. The fol1We found that out of 500 random tweets only 4.5% of problem reports and 9.1% of aid messages did not contain any problem report/aid message nuclei. lowing (imaginary) tweets exemplify the problems caused by contexts. FP1 I do not believe infant formula is sold out in Sendai. FA1 At Jusco supermarket in Iwaki, you can still buy infant formula. The problem nuclei of FP1 and P1 are the same but FP1 is not a problem report because of the expression “I do not believe”. The aid nuclei of FA1 and A1 are the same but FA1 does not constitute a proper match with P1 because FA1 and P1 refer to different cities, “Iwaki” and “Sendai”. In this work, the problems concerning the modality and other semantic modifications to problem/aid nuclei by context are dealt with by the introduction of features representing the text surrounding the nuclei in machine learning. 
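A minimal Python sketch of this pipeline, assuming placeholder components for the location recognizer, the nucleus-candidate extractor, and the three classifiers described in the following sections; it mirrors the control flow of Figure 1 rather than the authors' implementation.

    # Sketch of the matching pipeline of Figure 1 (placeholder components assumed).
    # A candidate is a (tweet, nucleus) pair, where a nucleus is a (noun, template)
    # dependency relation; matches require a shared noun and a shared location.

    def find_matches(tweets, extract_locations, nucleus_candidates,
                     is_problem_report, is_aid_message, is_match):
        problems, aids = [], []
        for tweet in tweets:
            locs = extract_locations(tweet)             # location dictionary lookup
            for nucleus in nucleus_candidates(tweet):   # noun-template dependencies
                if is_problem_report(tweet, nucleus):
                    problems.append((tweet, nucleus, locs))
                if is_aid_message(tweet, nucleus):
                    aids.append((tweet, nucleus, locs))
        matches = []
        for p_tweet, p_nucleus, p_locs in problems:
            for a_tweet, a_nucleus, a_locs in aids:
                same_noun = p_nucleus[0] == a_nucleus[0]
                same_location = bool(set(p_locs) & set(a_locs))
                if same_noun and same_location and is_match(
                        (p_tweet, p_nucleus), (a_tweet, a_nucleus)):
                    matches.append(((p_tweet, p_nucleus), (a_tweet, a_nucleus)))
        return matches

In practice the aid candidates would be indexed by nucleus noun and location rather than scanned pairwise.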
As for the location problem, we apply a location recognizer to all tweets and restrict the matching candidates to the tweet pairs referring to the same location. 2 Approach !"#$%"&"'()*&+ *,*&-.(.+*,/+ /0$0,/0,)-+ $*#.(,'+ !"#$%&'("&!#")("&*#+,-.&"( 12001.+ 12001+ /0$0,/0,)-+ #0&*3",+ $#"4&0!+#0$"#1+ $#"4&0!+,5)&05.+ /-0('&11/+&("&*#+,-.&"( *(/+!0..*'0++ *(/+,5)&05.+ $#"4&0!+#0$"#1+ $#"4&0!+,5)&05.+ *(/+!0..*'0+ *(/+,5)&05.+ !"#$%&'2/-0()3&&)('/)*4( 0()3&& $#"4&0!+*,/+*(/+,"5,+*#0+1%0+.*!06+.*!0+'0"'#*$%()*&+&")*3",+ &")*3",+ #0)"',(70#+ # !"#$%&'2/-0('/)*4("&*#+,-.&"( Figure 1: Problem-aid matching system overview. We developed machine learning based systems to recognize problem reports, aid messages and problem-aid tweet matches. Figure 1 illustrates the whole system. First, location names in tweets are identified by matching tweets against our location dictionary, described in Section 3. Then, each tweet is paired with each dependency relation in the tweet, which is a candidate of problem/aid nuclei and given to the problem report and aid message recognizers. A tweet-nucleus-candidate pair judged as problem report is combined with another tweet-nucleus-candidate pair recognized as an aid message if the two nuclei share the same noun and the tweets share the same location name, and given to the problem-aid match recognizer. 1620 In the following, problem and aid nuclei are denoted by a noun-template pair. A template is composed of a predicate and its argument position. For instance, “water supply stopped” in P2 is a problem nucleus, “water supply recovered” in A2 is an aid nucleus and they are denoted by the noun-template pairs ⟨water supply, X stopped⟩and ⟨water supply, X recovered⟩. P2 In Sendai city, water supply stopped. A2 In Sendai city, water supply recovered. Roughly speaking, we regard the tasks of problem report recognition and aid message recognition as the tasks of finding proper problem/aid nuclei in tweets and our method performs these tasks based on the semantic properties of nouns and templates in problem/aid nucleus candidates and their surrounding contexts. The basic intuition behind this approach can be explained using excitation polarity proposed in Hashimoto et al. (2012). Excitation polarity differentiates templates into ‘excitatory’ or ‘inhibitory’ with regard to the main function or effect of entities referred to by their argument noun. While excitatory templates (e.g., cause X, buy X, suffer from X) entail that the main function or effect is activated or enhanced, inhibitory templates (e.g., ruin X, prevent X, X runs out) entail that the main function or effect is deactivated or suppressed. The templates that do not fit into the above categorization are classified as ‘neutral’. We observed that problem reports in general included either of (A) a dependency relation between a noun referring to some trouble and an excitatory template or (B) a dependency relation between a noun not referring to any trouble and an inhibitory template. Examples of (A) include ⟨carbon monoxide poisoning, suffer from X⟩, ⟨false rumor, spread X⟩. They refer to events that activate troubles. On the other hand, (B) is exemplified by ⟨school, X is collapsed⟩, ⟨battery, X runs out⟩, which imply that some non-trouble objects such as resources, appliances and facilities are dysfunctional. We assume that if we can find such dependency relations in tweets, the tweets are likely to be problem reports. 
Contrary, a tweet is more likely to be an aid message when it includes either (C) a dependency relation between a noun referring to some trouble and an inhibitory template or (D) a dependency relation between a noun not referring to any troutrouble non-trouble excitatory (A) problem nucleus (D) aid nucleus inhibitory (C) aid nucleus (B) problem nucleus Table 1: Problem/aid-excitation matrix. ble and an excitatory template. Examples of (C) are ⟨flu, X was eradicated (in some shelter)⟩and ⟨debris, remove X⟩. They represent the dysfunction of troubles and can mean the solution or the settlement of troubles. On the other hand, examples of (D) include ⟨school, X re-build⟩and ⟨baby formula, buy X⟩. They entail that some resources function properly or become available. These formulations are summarized in Table 1. As an interesting consequence of such a view on problem/aid nucleus, we can say the following regarding problem-aid tweet matchings: when a problem nucleus and an aid nucleus are an adequate match, the excitation polarities of their templates are opposite. Consider the following tweets. P3 Some people were going back to Iwaki, but the water system has not come back yet. It’s terrible that bath is unusable. A3 We open the bath for the public, located on the 2F of Iwaki Kuhon temple. If you’re staying at a relief shelter and would like to take a bath, you can use it. “Bath is unusable” in P3 is a problem nucleus while “open the bath” in A3 is an aid nucleus. Since the problem reported in P3 can be solved with A3, they are a successful match. The inhibitory template “X is unusable” indicates that the function of “bath”, a non-trouble expression, is suppressed. The excitatory template “open X” indicates that the function of “bath” is activated. The same holds when we consider the noun referring to troubles like “flu”. The polarity of the template in a problem nucleus should be excitatory like “flu is raging” while that of an aid nucleus should be inhibitory like ⟨flu, X was eradicated⟩. These examples keep the constraint that the problem and aid nucleus should have opposite polarities when they constitute a match. Note that the formulations of problem report, aid message and their matches or the excitation matrix (Table 1) were not presented to our annotators and our test/training data may contain data that contradict with the formulations. These formulations constitute the hypothesis to be validated in this work. 1621 An important point to be stressed here is that there are problem-aid tweet matches that do not fit into our formulations. For instance, we assume that the problem nucleus and aid nucleus in a proper match share the same noun. However, tweet pairs such as “There are many injured people in Sendai city” and “We are sending ambulances to Sendai” can constitute a proper match, but there is no proper problem-aid nuclei pair that share the same noun in these tweets. (We can find the dependency relations sharing “Sendai” but they do not express anything about the contents of problem and aid.) The point is that the tweet pairs can be judged because people know ambulances can be a countermeasure to injured people as world knowledge. Introducing such world knowledge is beyond the scope of this current study. Also, we exclude direct requests from problem reports. As mentioned in the introduction, identifying direct requests is relatively easy, hence we excluded them from our target. 
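The matrix in Table 1 and the opposite-polarity constraint can be written down as a small rule, close in spirit to the RULE-BASED baselines evaluated later; the trouble-noun set and the template-to-polarity map are assumed to come from the dictionaries described in the next section, and the code is only a sketch of the intuition, not of the supervised recognizers.

    # Sketch of the Table 1 matrix: classify a nucleus (noun, template) as a
    # problem or aid indicator, and require opposite template polarities for a
    # rule-based problem-aid match. The dictionaries are assumed as inputs.

    def nucleus_role(noun, template, trouble_nouns, excitation):
        polarity = excitation.get(template, "neutral")   # excitatory / inhibitory / neutral
        trouble = noun in trouble_nouns
        if (trouble and polarity == "excitatory") or (not trouble and polarity == "inhibitory"):
            return "problem"                             # cells (A) and (B)
        if (trouble and polarity == "inhibitory") or (not trouble and polarity == "excitatory"):
            return "aid"                                 # cells (C) and (D)
        return None

    def rule_based_match(problem_nucleus, aid_nucleus, excitation):
        (p_noun, p_tmpl), (a_noun, a_tmpl) = problem_nucleus, aid_nucleus
        if p_noun != a_noun:
            return False                                 # a match shares the nucleus noun
        p_pol = excitation.get(p_tmpl, "neutral")
        a_pol = excitation.get(a_tmpl, "neutral")
        return {p_pol, a_pol} == {"excitatory", "inhibitory"}   # opposite polarities

    # e.g. nucleus_role("water supply", "X stopped", {"flu", "debris"},
    #                   {"X stopped": "inhibitory", "X recovered": "excitatory"})
    # returns "problem" (cell B), while "X recovered" yields "aid" (cell D).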
3 Problem Report and Aid Message Recognizers We recognize problem reports and aid messages in given tweets using a supervised classifier, SVMs with linear kernel, which worked best in our preliminary experiments. The feature set given to the SVMs are summarized in the top part of Table 2. Note that we used a common feature set for both the problem report recognizer and aid message recognizer and that it is categorized into several types: features concerning trouble expressions (TR), excitation polarity (EX), their combination (TREX1) and word sentiment polarity (WSP), features expressing morphological and syntactic structures of nuclei and their context surrounding problem/aid nuclei (MSA), features concerning semantic word classes (SWC) appearing in nuclei and their context, request phrases, such as “Please help us”, appearing in tweets (REQ), and geographical locations in tweets recognized by our location recognizer (GL). MSA is used to express the modality of nuclei and other contextual information surrounding nuclei. REQ was introduced based on our observation that if there are some requests in tweets, problem nuclei tend to appear as justification for the requests. We also attempted to represent nucleus template IDs, noun IDs and their combinations directly in our feature set to capture typical templates freTR Whether the nucleus noun is a trouble/non-trouble expression. EX1 The excitation polarity and the value of the excitation score of the nucleus template. TREX1 All possible combinations of trouble/non-trouble of TR and excitation polarities of EX1. WSP1 Whether the nucleus noun is positive/negative/not in the Word Sentiment Polarity (WSP) dictionary. WSP2 Whether the nucleus template is positive/negative/not in the WSP dictionary. WSP3 Whether the nucleus template is followed by a positive/negative word within the tweet. MSA1 Morpheme n-grams, syntactic dependency n-grams in the tweet and morpheme n-grams before and after the nucleus template. (1 ≤n ≤3) MSA2 Character n-grams of the nucleus template to capture conjugation and modality variations. (1 ≤n ≤3) MSA3 Morpheme and part-of-speech n-grams within the bunsetsu containing the nucleus template to capture conjugation and modality variations. (1 ≤n ≤3) (A bunsetsu is a syntactic constituent composed of a content word and several function words, the smallest unit of syntactic analysis in Japanese.) MSA4 The part-of-speech of the nucleus template’s head to capture modality variations outside the nucleus template’s bunsetsu. MSA5 The number of bunsetsu between the nucleus noun and the nucleus template. We found that a long distance between the noun and the template suggests parsing errors. MSA6 Re-occurrence of the nucleus noun’s postpositional particle between the nucleus noun and the nucleus template. We found that the re-occurrence of the same postpositional particle within a clause suggests parsing errors. SWC1 The semantic class n-grams in the tweet. SWC2 The semantic class(es) of the nucleus noun. REQ Presence of a request phrase in the tweet, identified from within 426 manually collected request phrases. GL Geographical locations in the tweet identified using our location recognizer. Existence/non-existence of locations in tweets are also encoded. EX2 Whether the problem and aid nucleus templates have the same or opposite excitation polarities. EX3 Product of the values of the excitation scores for the problem and the aid nucleus template. 
TREX2 All possible combinations of trouble/non-trouble of TR, excitation polarity EX1 of the problem nucleus template and excitation polarity EX1 of the aid nucleus template. SIM1 Common semantic word classes of the problem report and aid message. SIM2 Whether there are common nouns modifying the common nucleus noun or not in the problem report and aid message. SIM3 Whether the words in the same word class modify the common nucleus noun or not in the problem report and aid message. SIM4 The semantic similarity score between the problem nucleus template and the aid nucleus template. CTP Whether the problem nucleus template and the aid nucleus template are in contradiction relation dictionary or not. SSR1 Problem report recognizer’s SVM score of problem nucleus template. SSR2 Problem report recognizer’s SVM score of aid nucleus template. SSR3 Aid message recognizer’s SVM score of the problem nucleus template. SSR4 Aid message recognizer’s SVM score of the aid nucleus template. Table 2: Features used with the problem report recognizer and the aid message recognizer (above); additional features used in training the problem-aid match recognizer (below). quently appearing in problem and aid nuclei, but since there was no improvement we omit them. The other feature types need some non-trivial dictionaries. In the following, we explain how we created the dictionaries for each feature type along with the motivation behind their introduction. Trouble Expressions (TR) As mentioned previously, trouble expressions work as good evidence for recognizing problem reports and aid messages. The TR feature indicates whether the noun in the problem/aid nucleus candidate is a trouble ex1622 pression or not. For this purpose, we created a list of trouble expressions following the semisupervised procedure presented in De Saeger et al. (2008). After manual validation of the list, we obtained 20,249 expressions referring to some troubles, such as “tsunami” and “flu”. The value of the TR feature is determined by checking whether the nucleus noun is contained in the list. Excitation Polarities (EX) The excitation polarities are also important in recognizing problem reports and aid messages as mentioned before. For constructing the dictionary for excitation polarities of templates, we applied the bootstrapping procedure in Hashimoto et al. (2012) to 600 million Web pages. Hashimoto’s method provides the value of the excitation score in [−1, 1] for each template indicating the polarities and their strength. Positive value indicates excitatory, negative value inhibitory and small absolute value neutral. After manual checking of the results by the majority vote of three human annotators (other than the authors), we limited the templates to the ones that have score values consistent with the majority vote of the annotators, obtaining a dictionary consisting of 7,848 excitatory, 836 inhibitory and 7,230 neutral templates. The Fleiss’ (1971) kappa-score was 0.48 (moderate agreement). We used the excitation score values as feature values. Excitation has already been used in many works, such as causality and contradiction extraction (Hashimoto et al., 2012) or Why-QA (Oh et al., 2013). Word Sentiment Polarity (WSP) As we suggested before, full-fledged sentiment analysis to recognize the expressions, including clauses and phrases, that refer to something good or bad was not effective in our task. However, the sentiment polarity, assigned to single words turned out to be effective. 
To identify the sentiment polarity of words, we employed the word sentiment polarity dictionary used with a sentiment analysis tool for Japanese, the Opinion Extraction Tool software2, which is an implementation of Nakagawa et al. (2010). The dictionary includes 9,030 positive and 27,951 negative words. Note that we used the Opinion Extraction Tool in the experiments to check the effectiveness of the full-fledged sentiment analysis in this task. Semantic Word Class (SWC) We assume that nouns in the same semantic class behave simi2Provided at the ALAGIN Forum (http://www.alagin.jp/). larly in crisis situations. For example, if “infection” appears in a problem report, the tweets including “pulmonary embolism” are also likely to be problem reports. Semantic word class features are used to capture such tendencies. We applied an EM-style word clustering algorithm in Kazama and Torisawa (2008) to 600 million Web pages and clustered 1 million nouns into 500 classes. This algorithm has been used in many works, such as relation extraction (De Saeger et al., 2011) and Why-QA (Oh et al., 2012), and can generate various kinds of semantically clean word classes, such as foods, disease names, and natural disasters. We used the word classes in tweets as features.3 Geographical Locations (GL) Our location recognizer matches tweets against our location dictionary. Location names and their existence/non-existence in tweets constitute evidence, thus we encoded such information into our features. The location dictionary was created from the Japan Post code data4 and Wikipedia, containing 2.7 million location names including cities, schools and other facilities (Kazama et al., 2013). 4 Problem-Aid Match Recognizer After problem report and aid message recognition, the positive outputs of the respective classifiers are used as input in this step. The problemaid match recognizer classifies an aid messagenucleus pair together with the problem reportnucleus pair employing SVMs with linear kernel, which performed best in this task again. The problem-aid match recognizer uses all the features used in the problem report recognizer and the aid message recognizer along with additional features regarding: excitation polarity (EX) and trouble expressions (TR), distributional similarity (SIM), contradiction (CTP) and SVM-scores of the problem report and aid message recognizers (SSR). Here also we attempted to capture typical or frequent matches of nuclei using template and noun IDs and their combinations, but we did not observe any improvement so we omit them from the feature set. The bottom part of Table 2 summarizes the additional feature set, some of which are described below in more detail. 3There is a slight complication here. For each noun n, EM clustering estimates a probability distribution P(n|c∗) for n and semantic class c∗. From this distribution we obtained discrete semantic word classes by assigning each noun n to semantic class c = argmaxc∗p(c∗|n). 4http://www.post.japanpost.jp/zipcode/download.html 1623 As for TR and EX, our intuition is that if a problem nucleus and an aid nucleus are an adequate match, their excitation polarities are opposite, as described in Section 2. We encode whether the excitation polarities of nuclei templates are the same or not in our features. Also, the excitation polarities of problem and aid nuclei and TR are combined (TREX1, TREX2) so that the classifier can know whether the nuclei follow the constraint for adequate matches described in Section 2. 
As for SIM, if an aid message matches a problem report, besides the common nucleus noun, it is reasonable to assume that certain contexts are semantically similar. We capture this characteristic in three ways. SIM1 looks for common semantic word classes in the problem report and aid message. SIM2 and SIM3 target the modifiers of the common nucleus noun if they exist. We also observed that if an aid message matches a problem report, the problem nucleus template and aid nucleus template are often distributionally similar. A typical example is “X is sold out” and “buy X”. SIM4 captures this tendency. As the distributional similarity between templates, we used a Bayesian distributional similarity measure proposed by Kazama et al. (2010).5 CTP indicates whether the problem and aid nuclei are in contradiction relation or not. This feature was implemented based on the observation that when problem and aid nuclei are in contradiction relation, they are often proper matches (e.g., ⟨blackout, “X starts”⟩and ⟨blackout, “X ends”⟩). CTP indicates whether nucleus pairs are in the one million contradiction phrase pairs6 automatically obtained by applying a method proposed by Hashimoto et al. (2012) to 600 million Web pages. 5 Experiments We evaluated our problem report recognizer and problem-aid match recognizer. For the sake of space, we give only the performance figures of the aid message recognizer at the end of Section 5.1. We collected tweets posted during and after the 2011 Great East Japan Earthquake, between March 10 and April 4, 2011. After applying keyword-based filtering with a list of over 300 5The original similarity was defined over noun pairs and it was estimated from dependency relations. Obtaining similarity between template pairs, not noun pairs, is straightforward given the same dependency relations. We used 600 million Web pages for this similarity estimation. 6The precision of the pairs was reported as around 70%. disaster related keywords, we obtained 55 million tweets. After dependency parsing7, we used them in our evaluation. 5.1 Problem Report Recognition Firstly, we evaluated our problem report recognizer. Particularly, we assessed the effect of excitation polarities and trouble expressions in two settings. The first is against a naturally distributed gold standard data. The second targets problem reports with problem nuclei unseen in the training data. In both experiments we observed that the performance drops when excitation polarities and trouble expressions are removed from the feature set. The performance drop was larger in the second experiment which suggests that the excitation polarities and trouble expressions are more effective against unseen problem reports. Training and test data for problem report recognition consist of tweet-nucleus candidate pairs randomly sampled from our 55 million tweet data. The training data (R) and test data (T) consist of 13,000 and 1,000 pairs, respectively, manually labeled by three annotators (other than the authors) as problem or other. Final judgment was made by majority vote. The Fleiss’ kappa score for training and test data for annotation judgement is 0.74 (substantial agreement). Our problem report recognizer and its variants are listed in Table 3. Table 4 shows the evaluation results. The proposed method achieved about 44% recall and nearly 80% precision, outperforming all other systems in terms of precision, F-score and average precision8. 
The improvement in precision when using TR&EX is statistically significant (p < 0.05).9 Note that F-measure dropped PROPOSED: Our proposed method with all features used. PROPOSED-*: The proposed method without the feature set denoted by “*”. Here EX and TR denote all excitation polarity and trouble expression related features, respectively, including their combinations (TREX1). PROPOSED+OET: The proposed method incorporating the classification results of problem nucleus candidates by the Opinion Extraction Tool as additional binary features. RULE-BASED: The method that regards only nuclei satisfying the constraint in Table 1 as problem nuclei. Table 3: Evaluated problem report recognizers. 7http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?KNP 8We calculate average precision using the formula: aP = ∑n k=1(P rec(k)×rel(k)) n , where Prec(k) is the precision at cut-off k and rel(k) is an indicator function equaling 1 if the item at rank k is relevant, zero otherwise. 9Throughout this paper we performed two-tailed test of 1624 Recognition system R (%) P (%) F (%) aP (%) PROPOSED 44.26 79.41 56.83 71.82 PROPOSED-TR&EX 45.08 74.83 56.26 69.67 PROPOSED-EX 44.67 74.66 55.89 69.90 PROPOSED-TR 43.85 74.31 55.15 69.44 PROPOSED-MSA 28.69 70.71 40.81 57.74 PROPOSED-SWC 43.42 75.97 55.25 70.61 PROPOSED-WSP 43.14 77.83 55.50 70.45 PROPOSED-REQ 42.64 76.16 55.50 54.67 PROPOSED-GL 44.14 78.34 55.50 56.46 PROPOSED+OET 44.24 79.41 56.82 71.81 RULE-BASED 30.32 67.96 41.93 n/a Table 4: Recall (R), precision (P), F-score (F) and average precision (aP) of the problem report recognizers. whenever each type of feature was removed, implying that each type of feature is effective in this task. Especially note the performance drop if we remove excitation polarities (EX), trouble expression (TR) and both excitation and trouble expression features (TR&EX), confirming that they are crucial in recognizing problem reports with high accuracy. Also note that the performance of PROPOSED+OET was actually slightly worse than that of the proposed method. This suggests that fullfledged sentiment analysis is not effective at least in this setting. The rule-based method achieved relatively high precision despite of the low recall, demonstrating the importance of problem and aid nuclei formulations described in Section 1. The second experiment assessed the efficiency of our problem report recognizer against unseen problem nuclei under the condition that every template in nuclei has excitation polarity. We sampled the training and test data so that the problem nucleus nouns and templates in the training and test data are disjoint. First we created a subset of the test data by selecting the samples which had nuclei with excitation templates. We call this subset T ′. Next, we removed samples from training data R if either of their problem nouns or templates appeared in the nuclei of T ′. The resulting new training data (called R′) and test data (T ′) consist of 6,484 and 407 tweet-nucleus candidate pairs, respectively. We trained our problem report recognizer using R′ and tested its performance using T ′. Figure 2 shows the precision-recall curves obtained by changing the threshold on the SVM scores. The effectiveness of excitation polarities and trouble expressions was more evident in this setting. The PROPOSED’s performance was actually better in this setting (almost 50% recall at population proportion (Ott and Longnecker, 2010) using SVM-threshold=0. 
more than 80% precision) than the previous setting, showing that excitation templates and trouble expressions are crucial in achieving high performance, especially for unseen problem nuclei. Figure 2: Precision-recall curves of problem report recognizers against unseen problem nuclei (curves shown for PROPOSED, PROPOSED-TR, PROPOSED-EX and PROPOSED-TR&EX; recall on the x-axis, precision on the y-axis). The same was confirmed when we removed excitation polarity and trouble expression related features, with performance dropping by 7.43 points in terms of average precision. The improvement in precision when using TR&EX is statistically significant (p < 0.01). This implies, assuming that we have a wide-coverage dictionary of templates with excitation polarities, that excitation polarities are important in dealing with unexpected problems in disaster situations. We also evaluated the aid message recognizer, using tweet-nucleus pairs in R and T as training and test data; the annotation scheme was also the same. The average Fleiss' kappa score was 0.55 (moderate agreement). Our recognizer achieved 53.82% recall and 65.67% precision and showed tendencies similar to the problem report recognizer, with the excitation polarities and trouble expressions contributing to higher accuracy. We can conclude that excitation polarities and trouble expressions are important in identifying problem reports and aid messages during disaster situations. 5.2 Problem-Aid Matching Next, we evaluated the performance of the problem-aid match recognizer. We applied our problem report recognizer and aid message recognizer to all 55 million tweets and combined the tweet-nucleus pairs judged as problem reports and aid messages, respectively, to create the training and test data. The training data consists of two parts (M1 and M2). M1 includes many variations of the aid messages for each problem report, while M2 en-
As the final experiments, we evaluated topranking matches of our problem-aid match recognizer, where the recognizer classified all the possible combinations of tweet-nuclei pairs taken from 55 million tweets. In addition, we assessed the effectiveness of excitation polarities and trouble expressions by comparing all positive matches produced by our full problem-aid match recognizer (PROPOSED) and those produced by the problemaid match recognizer (PROPOSED-TR&EX) that PROPOSED: Our proposed method with all features used. PROPOSED-*: The proposed method without the feature set denoted by “*”. Here also EX and TR denote all excitation polarity and trouble expression related features, respectively, including their combinations (TREX1 and TREX2). RULE-BASED: The method that judges only problem-aid nuclei combinations with opposite excitation polarities as proper matches. Table 5: Evaluated problem-aid match recognizers. Matching system R (%) P (%) F (%) aP (%) PROPOSED 30.67 70.42 42.92 55.16 PROPOSED-TR&EX 28.83 67.14 40.33 53.99 PROPOSED-EX 31.29 67.11 42.68 54.19 PROPOSED-TR 30.56 69.33 42.42 54.85 PROPOSED-MSA 13.50 53.66 21.57 44.52 PROPOSED-SWC 26.99 67.69 38.59 52.23 PROPOSED-WSP 30.61 69.51 42.50 54.81 PROPOSED-CTP 30.06 70.00 42.05 54.94 PROPOSED-SIM 29.95 70.11 41.97 54.98 PROPOSED-REQ 30.58 70.25 42.61 54.67 PROPOSED-GL 30.61 70.31 42.65 55.02 PROPOSED-SSR 30.67 69.44 42.72 54.91 RULE-BASED 15.33 17.36 16.28 n/a Table 6: Recall (R), precision (P), F-score (F) and average precision (aP) of the problem-aid match recognizers. did not use excitation polarities and trouble expressions in its feature set. Note that PROPOSEDTR&EX was fed by the problem report and aid message recognizers that didn’t use excitation polarities and trouble expressions. For both systems’ training data we used R for the problem report and aid message recognizers; M1 and M2 for the problem-aid matching recognizers. PROPOSED and PROPOSED-TR&EX output 15.2 million and 13.4 million positive matches, covering 1,691 and 1,442 nucleus nouns, respectively. Table 7 shows match samples identified with PROPOSED. We observed that the output of each system was dominated by just a handful of frequent nucleus nouns, such as “water” or “gasoline”. We preferred to assess the performance of our system against a large variation of problem-aid nuclei, thus we restricted the number of matches to 10 for each noun10. After this restriction the number of matches found by PROPOSED and PROPOSED 0 0.2 0.4 0.6 0.8 1 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 Precision Rank PROPOSED (unseen) PROPOSED-TR&EX (unseen) PROPOSED (all) PROPOSED-TR&EX (all) Figure 3: Problem-aid match recognition performance for ‘all’ and ‘unseen’ problem reports. 10Note that this setting is a pessimistic estimation of our system’s overall performance, since according to our observations problem reports with very frequent nucleus nouns had proper matches with a higher accuracy than problem reports with less frequent nucleus nouns. 1626 Problem report: いわきの常磐病院、いわき泌尿器科病院、 竹林貞吉記念クリニック、泉中央クリニックは、17日か ら透析を中止します。患者の方は至急連絡してください。 (Starting from the 17th, the Iwaki Joban Hospital, the Iwaki Urology Clinique, the Takebayashi Sadakichi Memorial Clinique and the Izumi Central Clinique have all suspended dialysis sessions. Patients are advised to urgently make contact.) Aid message: いわき泌尿器科病院で短時間透析が可能で す。受付時間は9時から16時までです。(透析の再開) (Restart of dialysis sessions: short dialysis sessions are available at the Iwaki Urology Clinique between 9 AM and 4 PM.) 
Problem report: ごめんなさい拡散をお願いしてもいいで すか。仙台の父親の話ですと携帯の充電がもうない人が 続出しているそうです。携帯充電器の支援が必要かと思 われます。 (Please spread this message. According to my father in Sendai, there are more and more people whose phones ran out of battery. We need phone chargers!) Aid message: 【拡散希望】仙台若林区役所で携帯電話の 充電ができるそうです。 ([Please spread] At the City Hall of Wakabayashi-ku, Sendai, you can recharge your phone battery.) Table 7: Examples from the output of the proposed method in the ‘all’ setting. Problem report and aid message nuclei are boldfaced in the English translations. TR&EX was 8,484 and 7,363, respectively. The performance of PROPOSED and PROPOSED-TR&EX were assessed in two settings: ‘all’ and ‘unseen’. For ‘all’, we selected 400 problem-aid matches from the outputs of the respective systems after applying the 10-match restriction. For ‘unseen’, first we removed the samples from the systems’ outputs if either the nucleus noun or template pair appear in the nuclei of the problem-aid match recognizers’ training data. Next we applied the same sampling process as with ‘all’. Three annotators (other than the authors) manually labeled the sample sets, final judgment being made by majority vote. The Fleiss’ kappa score for all test data was 0.73 (substantial agreement). Figure 3 shows the systems’ precision curves, drawn from the samples whose X-axis positions represent the ranks according to SVM scores. In both scenarios we can confirm that excitation polarity and trouble expression related features contribute to this task. In the ‘all’ setting in terms of average precision calculated over the top 7,200 matches, PROPOSED’s 62.36% is 10.48 points higher than that of PROPOSED-TR&EX. For unseen problem/aid nuclei PROPOSED method’s average precision of 58.57% calculated at the top 3,800 matches is 5.47 points higher than that of PROPOSED-TR&EX at the same data point. The improvement in precision when using TR&EX is statistically significant in both settings (p < 0.01). 6 Related Work Twitter has been observed as a platform for situational awareness during various crisis situations (Starbird et al., 2010; Vieweg et al., 2010), as sensors for an earthquake reporting system (Sakaki et al., 2010; Okazaki and Matsuo, 2010) or to detect epidemics (Aramaki et al., 2011). Besides Twitter, blogs or forums have also been the target of community response analysis (Qu et al., 2009; Torrey et al., 2007). Similar to our work are the ones of Neubig et al. (2011) and Ishino et al. (2012), who tackle specific problems that occur during disasters (i.e., safety information and transportation information, respectively); and Munro (2011), who extracted “actionable messages” (requests and aids, indiscriminately), matching being performed manually. Our work differs from (Neubig et al., 2011) and (Ishino et al., 2012) in that we do not restrict the range of problem reports, and as opposed to (Munro, 2011), matching is automatic. Systems such as that of Seki (2011)11 or Munro (2013)12 are successful examples of crisis crowdsourcing, but these require extensive human intervention to coordinate useful information. Another category of related work relevant to our task is troubleshooting. Baldwin et al. (2007) and Raghavan et al. (2010) use discussion forums to solve technical problems using supervised learning methods, but these approaches presume that the solution of a specific problem is within the same thread. 
In our work we do not employ structural characteristics of tweets as restrictions (e.g., a problem report and its aid message need to be in the same tweet chain). 7 Conclusions In this paper, we proposed a method to discover matches between problem reports and aid messages from tweets in large-scale disasters. Through a series of experiments, we demonstrated that the performance of the problem-aid matching can be improved with the usage of semantic orientation of excitation polarities, proposed in (Hashimoto et al., 2012), and trouble expressions. We are planning to deploy our system and release model files of the classifiers to assist relief efforts in future crisis scenarios. 11http://www.sinsai.info/ 12http://www.mission4636.org/ 1627 References Adam Acar and Yuya Muraki. 2011. Twitter for crisis communication: Lessons learned from Japan’s tsunami disaster. International Journal of Web Based Communities, 7(3):392–402. Eiji Aramaki, Sachiko Maskawa, and Mizuki Morita. 2011. Twitter catches the flu: Detecting influenza epidemics using Twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1568–1576. Timothy Baldwin, David Martinez, and Richard B. Penman. 2007. Automatic thread classification for Linux user forum information access. In Proceedings of the 12th Australasian Document Computing Symposium (ADCS 2007), pages 72–79. Stijn De Saeger, Kentaro Torisawa, and Jun’ichi Kazama. 2008. Looking for trouble. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), pages 185– 192. Stijn De Saeger, Kentaro Torisawa, Masaaki Tsuchida, Jun’ichi Kazama, Chikara Hashimoto, Ichiro Yamada, Jong-Hoon Oh, Istv´an Varga, and Yulan Yan. 2011. Relation acquisition using word classes and partial patterns. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 825–835. Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 5:378–382. Chikara Hashimoto, Kentaro Torisawa, Stijn De Saeger, Jong-Hoon Oh, and Jun’ichi Kazama. 2012. Excitatory or inhibitory: A new semantic orientation extracts contradiction and causality from the web. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), pages 619–630. Aya Ishino, Shuhei Odawara, Hidetsugu Nanba, and Toshiyuki Takezawa. 2012. Extracting transportation information and traffic problems from tweets during a disaster: Where do you evacuate to? In Proceedings of the Second International Conference on Advances in Information Mining and Management (IMMM 2012), pages 91–96. Hiroshi Kanayama and Tetsuya Nasukawa. 2008. Textual demand analysis: Detection of users’ wants and needs from opinions. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), pages 409–416. Jun’ichi Kazama and Kentaro Torisawa. 2008. Inducing gazetteers for named entity recognition by largescale clustering of dependency relations. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-08: HLT), pages 407– 415. Jun’ichi Kazama, Stijn De Saeger, Kow Kuroda, Masaki Murata, and Kentaro Torisawa. 2010. A Bayesian method for robust estimation of distributional similarities. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 247–256. 
Jun’ichi Kazama, Stijn De Saeger, Kentaro Torisawa, Jun Goto, and Istv´an Varga. 2013. Saigaiji jouhou e no shitsumon outo shisutemu no tekiyou no kokoromi. (An attempt for applying question-answering system on disaster related information). In Proceeding of the Nineteenth Annual Meeting of The Association for Natural Language Processing. (in Japanese). Benjamin Mandel, Aron Culotta, John Boulahanis, Danielle Stark, Bonnie Lewis, and Jeremy Rodrigue. 2012. A demographic analysis of online sentiment during Hurricane Irene. In Proceedings of the Second Workshop on Language Analysis in Social Media (LASM 2012), pages 27–36. Robert Munro. 2011. Subword and spatiotemporal models for identifying actionable information in Haitian Kreyol. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning (CoNLL-2011), pages 68–77. Robert Munro. 2013. Crowdsourcing and the crisis-affected community. Information Retrieval, 16(2):210–266. Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using CRFs with hidden variables. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2010), pages 786–794. National Police Agency of Japan. 2013. Damage situation and public countermeasures associated with 2011 Tohoku district – off the Pacific Ocean Earthquake. http://www.npa.go.jp/archive/ keibi/biki/higaijokyo_e.pdf. (accessed on 30 April, 2013). Graham Neubig, Yuichiroh Matsubayashi, Masato Hagiwara, and Koji Murakami. 2011. Safety information mining ― what can NLP do in a disaster ―. In Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP 2011), pages 965–973. Jong-Hoon Oh, Kentaro Torisawa, Chikara Hashimoto, Takuya Kawada, Stijn De Saeger, Jun’ichi Kazama, and Yiou Wang. 2012. Why question answering using sentiment analysis and word classes. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL 2012), pages 368–378. 1628 Jong-Hoon Oh, Kentaro Torisawa, Chikara Hashimoto, Motoki Sano, Stijn De Saeger, and Kiyonori Ohtake. 2013. Why-question answering using intra- and inter-sentential causal relations. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013). Kiyonori Ohtake, Kentaro Torisawa, Jun Goto, and Stijn De Saeger. 2013. Saigaiji ni okeru hisaisha to kyuuen kyuujosha kan no souhoko komyunikeeshon. (Bi-directional communication between victims and rescures during a crisis). In Proceeding of the Nineteenth Annual Meeting of The Association for Natural Language Processing. (in Japanese). Makoto Okazaki and Yutaka Matsuo. 2010. Semantic Twitter: Analyzing tweets for real-time event notification. In Proceedings of the 2008/2009 international conference on Social software: Recent trends and developments in social software (BlogTalk 2008), pages 63–74. R. Lyman Ott and Michael T. Longnecker, 2010. An Introduction to Statistical Methods and Data Analysis, chapter 10.2. Brooks Cole, 6th edition. Yan Qu, Philip Fei Wu, and Xiaoqing Wang. 2009. Online community response to major disaster: A study of Tianya forum in the 2008 Sichuan Earthquake. In 42st Hawaii International International Conference on Systems Science (HICSS-42), pages 1–11. Preethi Raghavan, Rose Catherine, Shajith Ikbal, Nanda Kambhatla, and Debapriyo Majumdar. 2010. 
Extracting problem and resolution information from online discussion forums. In Proceedings of the 16th International Conference on Management of Data (COMAD 2010). Takeo Saijo. 2012. Hito-o tasukeru sungoi shikumi. (A stunning system that saves people). Diamond Inc. (in Japanese). Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: Real-time event detection by social sensors. In Proceedings of the 19th International Conference on World Wide Web (WWW 2010), pages 851–860. Motoki Sano, Istv´an Varga, Jun’ichi Kazama, and Kentaro Torisawa. 2012. Requests in tweets during a crisis: A systemic functional analysis of tweets on the Great East Japan Earthquake and the Fukushima Daiichi nuclear disaster. In Papers from the 39th International Systemic Functional Congress (ISFC39), pages 135–140. Haruyuki Seki. 2011. Higashi-nihon daishinsai fukkou shien platform sinsai.info no naritachi to kongo no kadai. (The organizational structure of sinsai.info restoration support platform for the 2011 Great East Japan Earthquake and future challenges). Journal of digital practices, 2(4):237–241. (in Japanese). Kate Starbird, Leysia Palen, Amanda L. Hughes, and Sarah Vieweg. 2010. Chatter on the red: What hazards threat reveals about the social life of microblogged information. In Proceedings of The 2010 ACM Conference on Computer Supported Cooperative Work (CSCW 2010), pages 241–250. Cristen Torrey, Moira Burke, Matthew L. Lee, Anind K. Dey, Susan R. Fussell, and Sara B. Kiesler. 2007. Connected giving: Ordinary people coordinating disaster relief on the Internet. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS-40), pages 179–188. Katerina Tsagkalidou, Vassiliki Koutsonikola, Athena Vakali, and Konstantinos Kafetsios. 2011. Emotional aware clustering on micro-blogging sources. In Proceedings of the 4th international conference on Affective computing and intelligent interaction (ACII 2011), pages 387–396. Sarah Vieweg, Amanda L. Hughes, Kate Starbird, and Leysia Palen. 2010. Microblogging during two natural hazards events: What Twitter may contribute to situational awareness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2010), pages 1079–1088. Patrick Winn. 2011. Japan tsunami disaster: As Japan scrambles, Twitter reigns. GlobalPost, 18 March. 1629
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 155–165, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Distortion Model Considering Rich Context for Statistical Machine Translation Isao Goto†,‡ Masao Utiyama† Eiichiro Sumita† Akihiro Tamura† Sadao Kurohashi‡ †National Institute of Information and Communications Technology ‡Kyoto University [email protected] {mutiyama, eiichiro.sumita, akihiro.tamura}@nict.go.jp [email protected] Abstract This paper proposes new distortion models for phrase-based SMT. In decoding, a distortion model estimates the source word position to be translated next (NP) given the last translated source word position (CP). We propose a distortion model that can consider the word at the CP, a word at an NP candidate, and the context of the CP and the NP candidate simultaneously. Moreover, we propose a further improved model that considers richer context by discriminating label sequences that specify spans from the CP to NP candidates. It enables our model to learn the effect of relative word order among NP candidates as well as to learn the effect of distances from the training data. In our experiments, our model improved 2.9 BLEU points for Japanese-English and 2.6 BLEU points for Chinese-English translation compared to the lexical reordering models. 1 Introduction Estimating appropriate word order in a target language is one of the most difficult problems for statistical machine translation (SMT). This is particularly true when translating between languages with widely different word orders. To address this problem, there has been a lot of research done into word reordering: lexical reordering model (Tillman, 2004), which is one of the distortion models, reordering constraint (Zens et al., 2004), pre-ordering (Xia and McCord, 2004), hierarchical phrase-based SMT (Chiang, 2007), and syntax-based SMT (Yamada and Knight, 2001). In general, source language syntax is useful for handling long distance word reordering. However, obtaining syntax requires a syntactic parser, which is not available for many languages. Phrase-based SMT (Koehn et al., 2007) is a widely used SMT method that does not use a parser. Phrase-based SMT mainly1 estimates word reordering using distortion models2. Therefore, distortion models are one of the most important components for phrase-based SMT. On the other hand, there are methods other than distortion models for improving word reordering for phrase-based SMT, such as pre-ordering or reordering constraints. However, these methods also use distortion models when translating by phrase-based SMT. Therefore, distortion models do not compete against these methods and are commonly used with them. If there is a good distortion model, it will improve the translation quality of phrase-based SMT and benefit to the methods using distortion models. In this paper, we propose two distortion models for phrase-based SMT. In decoding, a distortion model estimates the source word position to be translated next (NP) given the last translated source word position (CP). The proposed models are the pair model and the sequence model. The pair model utilizes the word at the CP, a word at an NP candidate site, and the words surrounding the CP and the NP candidates (context) simultaneously. In addition, the sequence model, which is the further improved model, considers richer context by identifying the label sequence that specify the span from the CP to the NP. 
It enables our model to learn the effect of relative word order among NP candidates as well as to learn the effect of distances from the training data. Our model learns the preference relations among NP 1A language model also supports the estimation. 2In this paper, reordering models for phrase-based SMT, which are intended to estimate the source word position to be translated next in decoding, are called distortion models. This estimation is used to produce a hypothesis in the target language word order sequentially from left to right. 155 kinou kare wa pari de hon wo katta he bought books in Paris yesterday Source: Target: Figure 1: An example of left-to-right translation for Japanese-English. Boxes represent phrases and arrows indicate the translation order of the phrases. candidates. Our model consists of one probabilistic model and does not require a parser. Experiments confirmed the effectiveness of our method for Japanese-English and Chinese-English translation, using NTCIR-9 Patent Machine Translation Task data sets (Goto et al., 2011). 2 Distortion Model for Phrase-Based SMT A Moses-style phrase-based SMT generates target hypotheses sequentially from left to right. Therefore, the role of the distortion model is to estimate the source phrase position to be translated next whose target side phrase will be located immediately to the right of the already generated hypotheses. An example is shown in Figure 1. In Figure 1, we assume that only the kare wa (English side: “he”) has been translated. The target word to be generated next will be “bought” and the source word to be selected next will be its corresponding Japanese word katta. Thus, a distortion model should estimate phrases including katta as a source phrase position to be translated next. To explain the distortion model task in more detail, we need to redefine more precisely two terms, the current position (CP) and next position (NP) in the source sentence. CP is the source sentence position corresponding to the rightmost aligned target word in the generated target word sequence. NP is the source sentence position corresponding to the leftmost aligned target word in the target phrase to be generated next. The task of the distortion model is to estimate the NP3 from NP candidates (NPCs) for each CP in the source sentence.4 3NP is not always one position, because there may be multiple correct hypotheses. 4This definition is slightly different from that of existing methods such as Moses and (Green et al., 2010). In existing methods, CP is the rightmost position of the last translated source phrase and NP is the leftmost position of the source phrase to be translated next. Note that existing methods do kinou 1 kare 2 wa 3 pari 4 de 5 hon 6 wo 7 katta 8 he bought books in Paris yesterday (a) kinou 1 kare 2 wa 3 pari 4 de 5 ni 6 satsu 7 hon 8 wo 9 katta 10 he bought two books in Paris yesterday (b) kinou 1 kare 2 wa 3 hon 4 wo 5 karita 6 ga 7 kanojo 8 wa 9 katta 10 he borrowed books yesterday but she bought (c) kinou 1 kare 2 wa 3 kanojo 4 ga 5 katta 6 hon 7 wo 8 karita 9 yesterday he borrowed the books that she bought (e) kinou 1 kare 2 wa 3 hon 4 wo 5 katta 6 ga 7 kanojo 8 wa 9 karita 10 he bought books yesterday but she borrowed (d) › ~ › › ~ › ~ › ~ CP NP Figure 2: Examples of CP and NP for JapaneseEnglish translation. The upper sentence is the source sentence and the sentence underneath is a target hypothesis for each example. The NP is in bold, and the CP is in bold italics. 
The point of an arrow with a × mark indicates a wrong NP candidate. Estimating NP is a difficult task. Figure 2 shows some examples. The superscript numbers indicate the word position in the source sentence. In Figure 2 (a), the NP is 8. However, in Figure 2 (b), the word (kare) at the CP is the same as (a), but the NP is different (the NP is 10). From these examples, we see that distance is not the essential factor in deciding an NP. And it also turns out that the word at the CP alone is not enough to estimate the NP. Thus, not only the word at the CP but also the word at a NP candidate (NPC) should be considered simultaneously. In (c) and (d) in Figure 2, the word (kare) at the CP is the same and karita (borrowed) and katta (bought) are at the NPCs. Karita is the word at the NP and katta is not the word at the NP for (c), while katta is the word at the NP and karita is not the word at the NP for (d). From these examples, considering what the word is at the NP not consider word-level correspondences. 156 is not enough to estimate the NP. One of the reasons for this difference is the relative word order between words. Thus, considering relative word order is important. In (d) and (e) in Figure 2, the word (kare) at the CP and the word order between katta and karita are the same. However, the word at the NP for (d) and the word at the NP for (e) are different. From these examples, we can see that selecting a nearby word is not always correct. The difference is caused by the words surrounding the NPCs (context), the CP context, and the words between the CP and the NPC. Thus, these should be considered when estimating the NP. In summary, in order to estimate the NP, the following should be considered simultaneously: the word at the NP, the word at the CP, the relative word order among the NPCs, the words surrounding NP and CP (context), and the words between the CP and the NPC. There are distortion models that do not require a parser for phrase-based SMT. The linear distortion cost model used in Moses (Koehn et al., 2007), whose costs are linearly proportional to the reordering distance, always gives a high cost to long distance reordering, even if the reordering is correct. The MSD lexical reordering model (Tillman, 2004; Koehn et al., 2005; Galley and Manning, 2008) only calculates probabilities for the three kinds of phrase reorderings (monotone, swap, and discontinuous), and does not consider relative word order or words between the CP and the NPC. Thus, these models are not sufficient for long distance word reordering. Al-Onaizan and Papineni (2006) proposed a distortion model that used the word at the CP and the word at an NPC. However, their model did not use context, relative word order, or words between the CP and the NPC. Ni et al. (2009) proposed a method that adjusts the linear distortion cost using the word at the CP and its context. Their model does not simultaneously consider both the word specified at the CP and the word specified at the NPCs. Green et al. (2010) proposed distortion models that used context. Their model (the outbound model) estimates how far the NP should be from the CP using the word at the CP and its context.5 Their model does not simultaneously con5They also proposed another model (the inbound model) sider both the word specified at the CP and the word specified at an NPC. For example, the outbound model considers the word specified at the CP, but does not consider the word specified at an NPC. Their models also do not consider relative word order. 
In contrast, our distortion model solves the aforementioned problems. Our distortion models utilize the word specified at the CP, the word specified at an NPC, and also the context of the CP and the NPC simultaneously. Furthermore, our sequence model considers richer context including the relative word order among NPCs and also including all the words between the CP and the NPC. In addition, unlike previous methods, our models learn the preference relations among NPCs. 3 Proposed Method In this section, we first define our distortion model and explain our learning strategy. Then, we describe two proposed models: the pair model and the sequence model that is the further improved model. 3.1 Distortion Model and Learning Strategy First, we define our distortion model. Let i be a CP, j be an NPC, S be a source sentence, and X be the random variable of the NP. In this paper, distortion probability is defined as P(X = j|i, S), which is the probability of an NPC j being the NP. Our distortion model is defined as the model calculating the distortion probability. Next, we explain the learning strategy for our distortion model. We train this model as a discriminative model that discriminates the NP from NPCs. Let J be a set of word positions in S other than i. We train the distortion model subject to ∑ j∈J P(X = j|i, S) = 1. The model parameters are learned to maximize the distortion probability of the NP among all of the NPCs J in each source sentence. This learning strategy is a kind of preference relation learning (Evgniou and Pontil, 2002). In this learning, the that estimates reverse direction distance. Each NPC is regarded as an NP, and the inbound model estimates how far the corresponding CP should be from the NP using the word at the NP and its context. 157 distortion probability of the actual NP will be relatively higher than those of all the other NPCs J. This learning strategy is different from that of (Al-Onaizan and Papineni, 2006; Green et al., 2010). For example, Green et al. (2010) trained their outbound model subject to ∑ c∈C P(Y = c|i, S) = 1, where C is the set of the nine distortion classes6 and Y is the random variable of the correct distortion class that the correct distortion is classified into. Distortion is defined as j −i −1. Namely, the model probabilities that they learned were the probabilities of distortion classes in all of the training data, not the relative preferences among the NPCs in each source sentence. 3.2 Pair Model The pair model utilizes the word at the CP, the word at an NPC, and the context of the CP and the NPC simultaneously to estimate the NP. This can be done by our distortion model definition and the learning strategy described in the previous section. In this work, we use the maximum entropy method (Berger et al., 1996) as a discriminative machine learning method. The reason for this is that a model based on the maximum entropy method can calculate probabilities. However, if we use scores as an approximation of the distortion probabilities, various discriminative machine learning methods can be applied to build the distortion model. Let s be a source word and sn 1 = s1s2...sn be a source sentence. We add a beginning of sentence (BOS) marker to the head of the source sentence and an end of sentence (EOS) marker to the end, so the source sentence S is expressed as sn+1 0 (s0 = BOS, sn+1 = EOS). 
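Before giving the model equations, a minimal Python sketch may help fix the quantities involved. The fragment below is not the authors' implementation; it only illustrates the Section 3.1 definition of the distortion probability as a distribution over all candidate positions j != i of a BOS/EOS-padded source sentence, where features is a hypothetical stand-in for the binary feature functions instantiated from the feature templates introduced below and weights is the learned parameter vector.

import math

def distortion_probs(source_tokens, i, features, weights):
    """Return {j: P(X = j | i, S)} for all candidate positions j != i."""
    padded = ["BOS"] + list(source_tokens) + ["EOS"]       # s_0 ... s_{n+1}
    candidates = [j for j in range(1, len(padded)) if j != i]
    scores = {}
    for j in candidates:
        active = features(padded, i, j)                    # names of firing binary features
        scores[j] = math.exp(sum(weights.get(f, 0.0) for f in active))
    z = sum(scores.values())                               # normalization over the candidates
    return {j: s / z for j, s in scores.items()}

Training, as described above, then raises the probability of the true NP relative to the other candidates in the same sentence.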
Our distortion model calculates the distortion probability for an NPC j ∈{j|1 ≤j ≤n + 1 ∧j ̸= i} for each CP i ∈{i|0 ≤i ≤n} P(X = j|i, S) = 1 Zi exp ( wTf (i, j, S, o, d) ) (1) where o = { 0 (i < j) 1 (i > j) , d =      0 (|j −i| = 1) 1 (2 ≤|j −i| ≤5) 2 (6 ≤|j −i|) , 6(−∞, −8], [−7, −5], [−4, −3], −2, 0, 1, [2, 3], [4, 6], and [7, ∞). In (Green et al., 2010), −1 was used as one of distortion classes. However, −1 represents the CP in our definition, and CP is not an NPC. Thus, we shifted all of the distortion classes for negative distortions by −1. Template ⟨o⟩, ⟨o, sp⟩1, ⟨o, ti⟩, ⟨o, tj⟩, ⟨o, d⟩, ⟨o, sp, sq⟩2, ⟨o, ti, tj⟩, ⟨o, ti−1, ti, tj⟩, ⟨o, ti, ti+1, tj⟩, ⟨o, ti, tj−1, tj⟩, ⟨o, ti, tj, tj+1⟩, ⟨o, si, ti, tj⟩, ⟨o, sj, ti, tj⟩ 1 p ∈{p|i −2 ≤p ≤i + 2 ∨j −2 ≤p ≤j + 2} 2 (p, q) ∈{(p, q)|i −2 ≤p ≤i + 2 ∧j −2 ≤q ≤ j + 2 ∧(|p −i| ≤1 ∨|q −j| ≤1)} Table 1: Feature templates. t is the part of speech of s. w is a weight parameter vector, each element of f(·) is a binary feature function, and Zi = ∑ j∈{j|1≤j≤n+1 ∧j̸=i}(numerator of Equation 1) is a normalization factor. o is an orientation of i to j and d is a distance class. The binary feature function that constitutes an element of f(·) returns 1 when its feature is matched and if else, returns 0. Table 1 shows the feature templates used to produce the features. A feature is an instance of a feature template. In Equation 1, i, j, and S are used by the feature functions. Thus, Equation 1 can utilize features consisting of both si, which is the word specified at i, and sj, which is the word specified at j, or both the context of i and the context of j simultaneously. Distance is considered using the distance class d. Distortion is represented by distance and orientation. The pair model considers distortion using six joint classes of d and o. 3.3 Sequence Model The pair model does not consider relative word order among NPCs or all the words between the CP and an NPC. In this section, we propose a further improved model, the sequence model, which considers richer context including relative word order among NPCs and also including all the words between the CP and an NPC. In (c) and (d) in Figure 2, karita (borrowed) and katta (bought) occur in the source sentences. The pair model considers the effect of distances using only the distance class d. If these positions are in the same distance class, the pair model cannot consider the differences in distances. In this case, these are conflict instances during training and it is difficult to distinguish the NP for translation. Now to explain how to consider the relative word order by the sequence model. The sequence model considers the relative word order by discriminating the label sequence corresponding to the NP from the label sequences corresponding to 158 Label Description C A position is the CP. I A position is a position between the CP and the NPC. N A position is the NPC. Table 2: The “C, I, and N” label set. Label sequence ID 1 N C 3 C N 4 C I N 5 C I I N 6 C I I I N 7 C I I I I N 8 C I I I I I N 9 C I I I I I I N 10 C I I I I I I I N 11 C I I I I I I I I N BOS0 kinou1 kare2 wa3 hon4 wo5 karita6 ga7 kanojo8 wa9 katta10 EOS11 (yesterday) (he) (book) (borrowed) (she) (bought) Source sentence Figure 3: Example of label sequences that specify spans from the CP to each NPC for the case of Figure 2 (c). The labels (C, I, and N) in the boxes are the label sequences. each NPC in each sentence. Each label sequence corresponds to one NPC. 
Therefore, if we identify the label sequence that corresponds to the NP, we can obtain the NP. The label sequences specify the spans from the CP to each NPC using three kinds of labels indicating the type of word positions in the spans. The three kinds of labels, “C, I, and N,” are shown in Table 2. Figure 3 shows examples of the label sequences for the case of Figure 2 (c). In Figure 3, the label sequences are represented by boxes and the elements of the sequences are labels. The NPC is used as the label sequence ID for each label sequence. The label sequence can treat relative word order. For example, the label sequence ID of 10 in Figure 3 knows that karita exists to the left of the NPC of 10. This is because karita6 carries a label I while katta10 carries a label N, and a position with label I is defined as relatively closer to the CP than a position with label N. By utilizing the label sequence and corresponding words, the model can reflect the effect of karita existing between the CP and the NPC of 10 on the probability. For the sequence model, karita (borrowed) and katta (bought) in (c) and (d) in Figure 2 are not conflict instances in training, whereas they are conflict instances in training for the pair model. The reason is as follows. In order to make the probability of the NPC of 10 smaller than the NPC of 6, instead of making the weight parameters for the features with respect to the word at the position of 10 with label N smaller than the weight parameters for the features with respect to the word at the position of 6 with label N, the sequence model can give negative weight parameters for the features with respect to the word at the position of 6 with label I. We use a sequence discrimination technique based on CRF (Lafferty et al., 2001) to identify the label sequence that corresponds to the NP. There are two differences between our task and the CRF task. One difference is that CRF discriminates label sequences that consist of labels from all of the label candidates, whereas we constrain the label sequences to sequences where the label at the CP is C, the label at an NPC is N, and the labels between the CP and the NPC are I. The other difference is that CRF is designed for discriminating label sequences corresponding to the same object sequence, whereas we do not assign labels to words outside the spans from the CP to each NPC. However, when we assume that another label such as E has been assigned to the words outside the spans and there are no features involving label E, CRF with our label constraints can be applied to our task. In this paper, the method designed to discriminate label sequences corresponding to the different word sequence lengths is called partial CRF. The sequence model based on partial CRF is derived by extending the pair model. We introduce the label l and extend the pair model to discriminating the label sequences. There are two extensions to the pair model. One extension uses labels. We suppose that label sequences specify the spans from the CP to each NPC. We conjoined all the feature templates in Table 1 with an additional feature template ⟨li, lj⟩to include the labels into features where li is the label corresponding to the position of i. The other extension uses sequence. In the pair model, the position pair of (i, j) is used to derive features. In contrast, to descriminate label sequences in the sequence model, the position pairs of (i, k), k ∈{k|i < k ≤j ∨j ≤k < i} 159 and (k, j), k ∈{k|i ≤k < j ∨j < k ≤i} are used to derive features. 
Note that in the feature templates in Table 1, i and j are used to specify two positions. When features are used for the sequence model, one of the positions is regarded as k. The distortion probability for an NPC j being the NP given a CP i and a source sentence S is calculated as: P(X = j|i, S) = 1 Zi exp ( ∑ k∈M∪{j} wTf (i, k, S, o, d, li, lk) + ∑ k∈M∪{i} wTf (k, j, S, o, d, lk, lj) ) (2) where M = { {m|i < m < j} (i < j) {m|j < m < i} (i > j) and Zi = ∑ j∈{j|1≤j≤n+1 ∧j̸=i}(numerator of Equation 2) is a normalization factor. Since j is used as the label sequence ID, discriminating j also means discriminating label sequence IDs. The first term in exp(·) in Equation 2 considers all of the word pairs located at i and other positions in the sequence, and also their context. The second term in exp(·) in Equation 2 considers all of the word pairs located at j and other positions in the sequence, and also their context. By designing our model to discriminate among different length label sequences, our model can naturally handle the effect of distances. Many features are derived from a long label sequence because it will contain many labels between the CP and the NPC. On the other hand, fewer features are derived from a short label sequence because a short label sequence will contain fewer labels between the CP and the NPC. The bias from these differences provides important clues for learning the effect of distances.7 7Note that the sequence model does not only consider larger context than the pair model, but that it also considers labels. The pair model does not discriminate labels, whereas the sequence model uses label N and label I for the positions except for the CP, depending on each situation. For example, in Figure 3, at position 6, label N is used in the label sequence ID of 6, but label I is used in the label sequence IDs of 7 to 11. Namely, even if they are at the same position, the labels in the label sequences are different. The sequence model discriminates the label differences. BOS kare wa pari de hon wo katta EOS BOS he bought books in Paris EOS Source: Target: training data Figure 4: Examples of supervised training data. The lines represent word alignments. The English side arrows point to the nearest word aligned on the right. 3.4 Training Data for Discriminative Distortion Model To train our discriminative distortion model, supervised training data is needed. The training data is built from a parallel corpus and word alignments between corresponding source words and target words. Figure 4 shows examples of training data. We select the target words aligned to the source words sequentially from left to right (target side arrows). Then, the order of the source words in the target word order is decided (source side arrows). The source sentence and the source side arrows are the training data. 4 Experiment In order to confirm the effects of our distortion model, we conducted a series of Japanese to English (JE) and Chinese to English (CE) translation experiments.8 4.1 Common Settings We used the patent data for the Japanese to English and Chinese to English translation subtasks from the NTCIR-9 Patent Machine Translation Task (Goto et al., 2011). There were 2,000 sentences for the test data and 2,000 sentences for the development data. Mecab9 was used for the Japanese morphological analysis. The Stanford segmenter10 and tagger11 were used for Chinese segmentation and POS tagging. The translation model was trained using sentences of 40 words or less from the training data. 
So approximately 2.05 million sentence pairs consisting of approximately 54 million 8We conducted JE and CE translation as examples of language pairs with different word orders and of languages where there is a great need for translation into English. 9http://mecab.sourceforge.net/ 10http://nlp.stanford.edu/software/segmenter.shtml 11http://nlp.stanford.edu/software/tagger.shtml 160 Japanese tokens whose lexicon size was 134k and 50 million English tokens whose lexicon size was 213k were used for JE. And approximately 0.49 million sentence pairs consisting of 14.9 million Chinese tokens whose lexicon size was 169k and 16.3 million English tokens whose lexicon size was 240k were used for CE. GIZA++ and growdiag-final-and heuristics were used to obtain word alignments. In order to reduce word alignment errors, we removed articles {a, an, the} in English and particles {ga, wo, wa} in Japanese before performing word alignments because these function words do not correspond to any words in the other languages. After word alignment, we restored the removed words and shifted the word alignment positions to the original word positions. We used 5gram language models that were trained using the English side of each set of bilingual training data. We used an in-house standard phrase-based SMT system compatible with the Moses decoder (Koehn et al., 2007). The SMT weighting parameters were tuned by MERT (Och, 2003) using the development data. To stabilize the MERT results, we tuned three times by MERT using the first half of the development data and we selected the SMT weighting parameter set that performed the best on the second half of the development data based on the BLEU scores from the three SMT weighting parameter sets. We compared systems that used a common SMT feature set from standard SMT features and different distortion model features. The common SMT feature set consists of: four translation model features, phrase penalty, word penalty, and a language model feature. The compared different distortion model features are: the linear distortion cost model feature (LINEAR), the linear distortion cost model feature and the six MSD bidirectional lexical distortion model (Koehn et al., 2005) features (LINEAR+LEX), the outbound and inbound distortion model features discriminating nine distortion classes (Green et al., 2010) (9-CLASS), the proposed pair model feature (PAIR), and the proposed sequence model feature (SEQUENCE). 4.2 Training for the Proposed Models Our distortion model was trained as follows: We used 0.2 million sentence pairs and their word alignments from the data used to build the translation model as the training data for our distortion models. The features that were selected and used were the ones that had been counted12, using the feature templates in Table 1, at least four times for all of the (i, j) position pairs in the training sentences. We conjoined the features with three types of label pairs ⟨C, I⟩, ⟨I, N⟩, or ⟨C, N⟩as instances of the feature template ⟨li, lj⟩to produce features for SEQUENCE. The L-BFGS method (Liu and Nocedal, 1989) was used to estimate the weight parameters of maximum entropy models. The Gaussian prior (Chen and Rosenfeld, 1999) was used for smoothing. 4.3 Training for the Compared Models For 9-CLASS, we used the same training data as for our distortion models. Let ti be the part of speech of si. 
We used the following feature templates to produce features for the outbound model: ⟨si−2⟩, ⟨si−1⟩, ⟨si⟩, ⟨si+1⟩, ⟨si+2⟩, ⟨ti⟩, ⟨ti−1, ti⟩, ⟨ti, ti+1⟩, and ⟨si, ti⟩. These feature templates correspond to the components of the feature templates of our distortion models. In addition to these features, we used a feature consisting of the relative source sentence position as the feature used by (Green et al., 2010). The relative source sentence position is discretized into five bins, one for each quintile of the sentence. For the inbound model13, i of the feature templates was changed to j. Features occurring four or more times in the training sentences were used. The maximum entropy method with Gaussian prior smoothing was used to estimate the model parameters. The MSD bidirectional lexical distortion model was built using all of the data used to build the translation model. 4.4 Results and Discussion We evaluated translation quality based on the caseinsensitive automatic evaluation score BLEU-4 (Papineni et al., 2002). We used distortion limits of 10, 20, 30, and unlimited (∞), which limited the number of words for word reordering to a maximum number. Table 3 presents our main results. The proposed SEQUENCE outperformed the baselines for both Japanese to English and Chinese to English translation. This demonstrates the effectiveness of the proposed SEQUENCE. The scores of the proposed SEQUENCE were higher than those 12When we counted features for selection, we only counted features that were from the feature templates of ⟨si, sj⟩, ⟨ti, tj⟩, ⟨si, ti, tj⟩, and ⟨sj, ti, tj⟩in Table 1 when j was not the NP, in order to avoid increasing the number of features. 13The inbound model is explained in footnote 5. 161 Japanese-English Chinese-English Distortion limit 10 20 30 ∞ 10 20 30 ∞ LINEAR 27.98 27.74 27.75 27.30 29.18 28.74 28.31 28.33 LINEAR+LEX 30.25 30.37 30.17 29.98 30.81 30.24 30.16 30.13 9-CLASS 30.74 30.98 30.92 30.75 31.80 31.56 31.31 30.84 PAIR 31.62 32.36 31.96 32.03 32.51 32.30 32.25 32.32 SEQUENCE 32.02 32.96 33.29 32.81 33.41 33.44 33.35 33.41 Table 3: Evaluation results for each method. The values are case-insensitive BLEU scores. Bold numbers indicate no significant difference from the best result in each language pair using the bootstrap resampling test at a significance level α = 0.01 (Koehn, 2004). Japanese-English Chinese-English HIER 30.47 32.66 Table 4: Evaluation results for hierarchical phrasebased SMT. of the proposed PAIR. This confirms the effectiveness for considering relative word order and words between the CP and an NPC. The proposed PAIR outperformed 9-CLASS, confirming that considering both the word specified at the CP and the word specified at the NPC simultaneously was more effective than that of 9-CLASS. For translating between languages with widely different word orders such as Japanese and English, a small distortion limit is undesirable because there are cases where correct translations cannot be produced with a small distortion limit, since the distortion limit prunes the search space that does not meet the constraint. Therefore, a large distortion limit is required to translate correctly. For JE translation, our SEQUENCE achieved significantly better results at distortion limits of 20 and 30 than that at a distortion limit of 10, while the baseline systems of LINEAR, LINEAR+LEX, and 9-CLASS did not achieve this. This indicate that SEQUENCE could treat long distance reordering candidates more appropriately than the compared methods. 
We also tested hierarchical phrase-based SMT (Chiang, 2007) (HIER) using the Moses implementation. The common data was used to train HIER. We used unlimited max-chart-span for the system setting. Results are given in Table 4. Our SEQUENCE outperformed HIER. The gain for JE was large but the gain for CE was modest. Since phrase-based SMT is generally faster in decoding speed than hierarchical phrase-based SMT, achieving better or comparable scores is worthDistortion Probability Figure 5: Average probabilities for large distortion for Japanese-English translation. while. To investigate the tolerance for sparsity of the training data, we reduced the training data for the sequence model to 20,000 sentences for JE translation.14 SEQUENCE using this model with a distortion limit of 30 achieved a BLEU score of 32.22.15 Although the score is lower than the score of SEQUENCE with a distortion limit of 30 in Table 3, the score was still higher than those of LINEAR, LINEAR+LEX, and 9-CLASS for JE in Table 3. This indicates that the sequence model also works even when the training data is not large. This is because the sequence model considers not only the word at the CP and the word at an NPC but also rich context, and rich context would be effective even for a smaller set of training data. 14We did not conduct experiments using larger training data because there would have been a very high computational cost to build models using the L-BFGS method. 15To avoid effects from differences in the SMT weighting parameters, we used the same SMT weighting parameters for SEQUENCE, with a distortion limit of 30, in Table 3. 162 To investigate how well SEQUENCE learns the effect of distance, we checked the average distortion probabilities for large distortions of j −i −1. Figure 5 shows three kinds of probabilities for distortions from 3 to 20 for Japanese-English translation. One is the average distortion probabilities in the Japanese test sentences for each distortion for SEQUENCE, and another is this for PAIR. The third (CORPUS) is the probabilities for the actual distortions in the training data that were obtained from the word alignments used to build the translation model. The probability for a distortion for CORPUS was calculated by the number of the distortion divided by the total number of distortions in the training data. Figure 5 shows that when a distance class feature used in the model was the same (e.g., distortions from 5 to 20 were the same distance class feature), PAIR produced average distortion probabilities that were almost the same. In contrast, the average distortion probabilities for SEQUENCE decreased when the lengths of the distortions increased, even if the distance class feature was the same, and this behavior was the same as that of CORPUS. This confirms that the proposed SEQUENCE could learn the effect of distances appropriately from the training data.16 5 Related Works We discuss related works other than discussed in Section 2. Xiong et al. (2012) proposed a model predicting the orientation of an argument with respect to its verb using a parser. Syntactic structures and predicate-argument structures are useful for reordering. However, orientations do not handle distances. Thus, our distortion model does not compete against the methods predicting orientations using a parser and would assist them if used 16We also checked the average distortion probabilities for the 9-CLASS outbound model in the Japanese test sentences for Japanese-English translation. 
We averaged the average probabilities for distortions in a distortion span of [4, 6] and also averaged those in a distortion span of [7, 20], where the distortions in each span are in the same distortion class. The average probability for [4, 6] was 0.058 and that for [7, 20] was 0.165. From CORPUS, the average probabilities in the training data for each distortion in [4, 6] were higher than those for each distortion in [7, 20]. However, the converse was true for the comparison between the two average probabilities for the outbound model. This is because the sum of probabilities for distortions from 7 and above was larger than the sum of probabilities for distortions from 4 to 6 in the training data. This comparison indicates that the 9-CLASS outbound model could not appropriately learn the effects of large distances for JE translation. together. There are word reordering constraint methods using ITG (Wu, 1997) for phrase-based SMT (Zens et al., 2004; Yamamoto et al., 2008; Feng et al., 2010). These methods consider sentence level consistency with respect to ITG. The ITG constraint does not consider distances of reordering and was used with other distortion models. Our distortion model does not consider sentence level consistency, so our distortion model and ITG constraint methods are thought to be complementary. There are tree-based SMT methods (Chiang, 2007; Galley et al., 2004; Liu et al., 2006). In many cases, tree-based SMT methods do not use the distortion models that consider reordering distance apart from translation rules because it is not trivial to use distortion scores considering the distances for decoders that do not generate hypotheses from left to right. If it could be applied to these methods, our distortion model might contribute to tree-based SMT methods. Investigating the effects will be for future work. 6 Conclusion This paper described our distortion models for phrase-based SMT. Our sequence model simply consists of only one probabilistic model, but it can consider rich context. Experiments indicate that our models achieved better performance and the sequence model could learn the effect of distances appropriately. Since our models do not require a parser, they can be applied to many languages. Future work includes application to other language pairs, incorporation into ITG constraint methods and other reordering methods, and application to tree-based SMT methods. References Yaser Al-Onaizan and Kishore Papineni. 2006. Distortion models for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 529–536, Sydney, Australia, July. Association for Computational Linguistics. Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Comput. Linguist., 22(1):39–71, March. Stanley F. Chen and Ronald Rosenfeld. 1999. A gaussian prior for smoothing maximum entropy models. Technical report. 163 David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Theodoros Evgniou and Massimiliano Pontil. 2002. Learning preference relations from data. Neural Nets Lecture Notes in Computer Science, 2486:23– 32. Yang Feng, Haitao Mi, Yang Liu, and Qun Liu. 2010. An efficient shift-reduce decoding algorithm for phrased-based machine translation. In Coling 2010: Posters, pages 285–293, Beijing, China, August. 
Coling 2010 Organizing Committee. Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 848–856, Honolulu, Hawaii, October. Association for Computational Linguistics. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 273–280, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Isao Goto, Bin Lu, Ka Po Chow, Eiichiro Sumita, and Benjamin K. Tsou. 2011. Overview of the patent machine translation task at the NTCIR-9 workshop. In Proceedings of NTCIR-9, pages 559–578. Spence Green, Michel Galley, and Christopher D. Manning. 2010. Improved models of distortion cost for statistical machine translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 867–875, Los Angeles, California, June. Association for Computational Linguistics. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In Proceedings of the International Workshop on Spoken Language Translation. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic, June. Association for Computational Linguistics. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. Association for Computational Linguistics. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of 18th International Conference on Machine Learning, pages 282–289. D.C. Liu and J. Nocedal. 1989. On the limited memory method for large scale optimization. Mathematical Programming B, 45(3):503–528. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 609–616, Sydney, Australia, July. Association for Computational Linguistics. Yizhao Ni, Craig Saunders, Sandor Szedmak, and Mahesan Niranjan. 2009. Handling phrase reorderings for machine translation. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 241–244, Suntec, Singapore, August. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Christoph Tillman. 2004. A unigram orientation model for statistical machine translation. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Short Papers, pages 101– 104, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Fei Xia and Michael McCord. 2004. Improving a statistical mt system with automatically learned rewrite patterns. In Proceedings of Coling 2004, pages 508– 514, Geneva, Switzerland, Aug 23–Aug 27. COLING. Deyi Xiong, Min Zhang, and Haizhou Li. 2012. Modeling the translation of predicate-argument structure for smt. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 902–911, Jeju Island, Korea, July. Association for Computational Linguistics. 164 Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530, Toulouse, France, July. Association for Computational Linguistics. Hirofumi Yamamoto, Hideo Okuma, and Eiichiro Sumita. 2008. Imposing constraints from the source tree on ITG constraints for SMT. In Proceedings of the ACL-08: HLT Second Workshop on Syntax and Structure in Statistical Translation (SSST-2), pages 1–9, Columbus, Ohio, June. Association for Computational Linguistics. Richard Zens, Hermann Ney, Taro Watanabe, and Eiichiro Sumita. 2004. Reordering constraints for phrase-based statistical machine translation. In Proceedings of Coling 2004, pages 205–211, Geneva, Switzerland, Aug 23–Aug 27. COLING. 165
2013
16
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1630–1639, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations Angeliki Lazaridou University of Trento [email protected] Ivan Titov Saarland University [email protected] Caroline Sporleder Trier University [email protected] Abstract We propose a joint model for unsupervised induction of sentiment, aspect and discourse information and show that by incorporating a notion of latent discourse relations in the model, we improve the prediction accuracy for aspect and sentiment polarity on the sub-sentential level. We deviate from the traditional view of discourse, as we induce types of discourse relations and associated discourse cues relevant to the considered opinion analysis task; consequently, the induced discourse relations play the role of opinion and aspect shifters. The quantitative analysis that we conducted indicated that the integration of a discourse model increased the prediction accuracy results with respect to the discourse-agnostic approach and the qualitative analysis suggests that the induced representations encode a meaningful discourse structure. 1 Introduction With the rapid growth of the Web, it is becoming increasingly difficult to discern useful from irrelevant information, particularly in user-generated content, such as product reviews. To make it easier for the reader to separate the wheat from the chaff, it is necessary to structure the available information. In the review domain, this is done in aspectbased sentiment analysis which aims at identifying text fragments in which opinions are expressed about ratable aspects of products, such as ‘room quality’ or ‘service quality’. Such fine-grained analysis can serve as the first step in aspect-based sentiment summarization (Hu and Liu, 2004), a task with many practical applications. Aspect-based summarization is an active research area for which various techniques have been developed, both statistical (Mei et al., 2007; Titov and McDonald, 2008b) and not (Hu and Liu, 2004), and relying on different types of supervision sources, such as sentiment-annotated texts or polarity lexica (Turney and Littman, 2002). Most methods rely on local information (bag-of-words, short ngrams or elementary syntactic fragments) and do not attempt to account for more complex interactions. However, these local lexical representations by themselves are often not sufficient to infer a sentiment or aspect for a fragment of text. For instance, in the following example taken from a TripAdvisor1 review: Example 1. The room was nice but let’s not talk about the view. it is difficult to deduce on the basis of local lexical features alone that the opinion about the view is negative. The clause let’s not talk about the view could by itself be neutral or even positive given the right context (e.g., I’ve never seen such a fancy hotel room, my living room doesn’t look that cool... and let’s not talk about the view). However, the contrast relation signaled by the connective but makes it clear that the second clause has a negative polarity. The same observations can be made about transitions between aspects: changes in aspect are often clearly marked by discourse connectives. 
Importantly, some of these cues are not discourse connectives in the strict linguistic sense and are specific to the review domain (e.g., the phrase I would also in a review indicates that the topic is likely to be changed). In order to accurately predict sentiment and topic,2 a model needs to ac1http://www.tripadvisor.com/ 2In what follows, we use the terms aspect and topic, inter1630 count for these discourse phenomena and cannot rely solely on local lexical information. These issues have not gone unnoticed to the research community. Consequently, there has recently been an increased interest in models that leverage content and discourse structure in sentiment analysis tasks. However, discourse-level information is typically incorporated in a pipeline architecture, either in the form of sentiment polarity shifters (Polanyi and Zaenen, 2006; Nakagawa et al., 2010) that operate on the lexical level or by using discourse relations (Taboada et al., 2008; Zhou et al., 2011) that comply with discourse theories like Rhetorical Structure Theory (RST) (Mann and Thompson, 1988). Such approaches have a number of disadvantages. First, they require additional resources, such as lists of polarity shifters or discourse connectives which signal specific relations. These resources are available only for a handful of languages. Second, relying on a generic discourse analysis step that is carried out before sentiment analysis may introduce additional noise and lead to error propagation. Furthermore, these techniques will not necessarily be able to induce discourse relations informative for the sentiment analysis domain (Voll and Taboada, 2007). An alternative approach is to define a taskspecific scheme of discourse relations (Somasundaran et al., 2009). This previous work showed that task-specific discourse relations are helpful in predicting sentiment, however, in doing so they relied on gold-standard discourse annotation at test time rather than predicting it automatically or inducing it jointly with sentiment polarity. We take a different approach and induce discourse and sentiment information jointly in an unsupervised (or weakly supervised) manner. This has the advantage of not having to pre-specify a mapping from discourse cues to discourse relations; our model induces this automatically, which makes it portable to new domains and languages. Joint induction of discourse and sentiment structure also has the added benefit that the model is able to learn exactly those aspects of discourse structure that are relevant for sentiment analysis. We start with a relatively standard joint model of sentiment and topic, which can be regarded as a cross-breed between the JST model (Lin and He, 2009) and the ASUM model (Jo and Oh, 2011), changeably as well as sentiment levels and opinion polarity. both state-of-the-art techniques. This model is weakly supervised, as it relies solely on documentlevel (i.e. not aspect-specific) opinion polarity labels to induce topics and sentiment on the subsentential level. In order to test our hypothesis that discourse information is beneficial, we add a discourse modeling component. Note that in modeling discourse we do not exploit any kind of supervision. We demonstrate that the resulting model outperforms the baseline on a product review dataset (see Section 5). 
To the best of our knowledge, unsupervised joint induction of discourse structure, sentiment and topic information has not been considered before, particularly not in the context of the aspect-based sentiment analysis task. Importantly, our method for discourse modeling is a general method which can be integrated in virtually any LDA-style model of aspect and sentiment. 2 Modeling Discourse Structure Discourse cues typically do not directly indicate sentiment polarity (or aspect). However, they can indicate how polarity (or aspect) changes as the text unfolds. As we have seen in the examples above, changes in polarity can happen on a subsentential level, i.e., between adjacent clauses or, from a discourse-theoretic point of view, between adjacent elementary discourse units (EDUs). To model these changes we need a strong linguistic signal, for example, in the form of discourse connectives or other discourse cues. We hypothesize that these are more likely to occur at the beginning of an EDU than in the middle or at the end. This is certainly true for most of the traditional discourse relation cues (particularly connectives). Changes in polarity or aspect are often correlated with specific discourse relations, such as ‘contrast’. However, not all relations are relevant and there is no one-to-one correspondence between relations and sentiment changes.3 Furthermore, if a discourse relation signals a change, it is typically ambiguous whether this change occurs with the polarity (example 1) or the aspect (the room was nice but the breakfast was even better) or both (the room was nice but the breakfast was awful). Therefore, we do not explicitly model 3The ‘explanation’ relation, for example, can occur with a polarity change (We were upgraded to a really nice room because the hotel made a terrible blunder with our booking) but does not have to (The room was really nice because the hotel was newly renovated). 1631 Name Description AltSame different polarity, same aspect SameAlt same polarity, different aspect AltAlt different polarity and aspect Table 1: Discourse relations generic discourse relations; instead, inspired by the work of Somasundaran et al. (2008), we define three very general relations which encode how polarity and aspect change (Table 1). Note that we do not have a discourse relation SameSame since we do not expect to have strong linguistic evidence which states that an EDU contains the same sentiment information as the previous one.4 However, we assume that the sentiment and topic flow is fairly smooth in general. In other words, for two adjacent EDUs not connected by any of the above three relations, the prior probability of staying at the same topic and sentiment level is higher than picking a new topic and sentiment level (i.e. we use “sticky states” (Fox et al., 2008)). 3 Model In this section we describe our Bayesian model, first the discourse-agnostic model and then an extension needed to encode discourse information. The formal generative story is presented in Figure 1: the red fragments correspond to the discourse modeling component. In order to obtain the generative story for the discourse-agnostic model, they simply need to be ignored. 3.1 Discourse-agnostic model In our approach we make an assumption that all the words in an EDU correspond to the same topic and sentiment level. We also assume that an overall sentiment of the document is defined, this is the only supervision we use in inducing the model. 
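To make the relation scheme in Table 1 concrete, the following is a minimal sketch of how the three induced relations (plus the NoRelation/SameSame default) could be encoded as expectations about polarity and aspect changes between adjacent EDUs. The dictionary and function names are our own illustrative choices, not part of the model specification.

```python
# Illustrative encoding of the discourse relations in Table 1: each relation
# records whether sentiment polarity and aspect are expected to change between
# two adjacent EDUs.  NoRelation plays the role of SameSame, i.e. the "sticky"
# default of keeping both the topic and the sentiment level.
RELATION_EFFECTS = {
    "AltSame":    {"polarity_changes": True,  "aspect_changes": False},
    "SameAlt":    {"polarity_changes": False, "aspect_changes": True},
    "AltAlt":     {"polarity_changes": True,  "aspect_changes": True},
    "NoRelation": {"polarity_changes": False, "aspect_changes": False},
}

def consistent(relation, prev_edu, next_edu):
    """True if the change between two adjacent EDUs matches what the relation
    predicts.  Each EDU is a (sentiment, aspect) pair, e.g. (+1, "rooms")."""
    effect = RELATION_EFFECTS[relation]
    return ((prev_edu[0] != next_edu[0]) == effect["polarity_changes"] and
            (prev_edu[1] != next_edu[1]) == effect["aspect_changes"])

# E.g. "the room was nice but the breakfast was awful": the cue "but" would
# ideally be associated with AltAlt, since both polarity and aspect change.
assert consistent("AltAlt", (+1, "rooms"), (-1, "food"))
```

In the full model these expectations act only as soft constraints on the transition distributions described in Section 3.2.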
Unlike some of the previous work (e.g., (Titov and McDonald, 2008a)), we do not constrain aspectspecific sentiment to be the same across the document. We describe our discourse-agnostic model by first describing the set of corpus-level and document-level parameters, and then explain how the content of each document is generated. Drawing model parameters On the corpus level, for every topic z ∈{1, . . . , K} and every sentiment polarity level y ∈{−1, 0, +1}, we start by drawing a unigram language model 4The typical connective in this situation would be and which is highly ambiguous and can signal several traditional discourse relations. from a Dirichlet prior. For example, the language model of the aspect service may indicate that the word friendly is used to express a positive opinion, whereas the word rude expresses a negative one. Similarly, for every topic z and every overall sentiment polarity ˆy, we draw a distribution ψˆy,z over opinion polarity in this topic z. Intuitively, one would expect the sentiment of an aspect to more often agree with the overall sentiment ˆy than not. This intuition is encoded in an asymmetric Dirichlet prior Dir(γ ˆy) for ψˆy,z : γ ˆy = (γˆy,1, . . . , γˆy,M), γˆy,y = β + τδy,ˆy, where δy,ˆy is the Kronecker symbol, β and τ are nonnegative scalar parameters. Using these “heavy-diagonal” priors is crucial, as this is the way to ensure that the overall sentiment level is tied to the aspectspecific sentiment level. Otherwise, sentiment levels will be specific to individual aspects (e.g., the ”+1” sentiment for one topic may correspond to a ”-1” sentiment for another one). Without this property we would not be able to encode soft constraints imposed by the discourse relations. Drawing documents On the document level, as in the standard LDA model, we choose the distribution over topics for the document from a symmetric Dirichlet prior parametrized by α, which is used to control sparsity of topic assignments. Furthermore, we draw the global sentiment ˆyd from a uniform distribution. The generation of a document is done on the EDU-by-EDU basis. In this work, we assume that EDU segmentation is provided by the preprocessing step. First, we generate the aspect zd,s for EDU s according to the distribution of topics θd. Then, we choose a sentiment level yd,s for the considered EDU from the categorical distribution ψˆyd,zd,s, conditioned on the aspect zd,s, as well as on the global sentiment of the document ˆyd. Finally, we generate the bag of words for the EDU by drawing the words from the aspect- and sentiment-specific language model. This model can be seen as a variant of a state-ofthe-art model for jointly inducing sentiment and aspect at the sentence level (Jo and Oh, 2011), or, more precisely, as its combination with the JST model (Lin and He, 2009), adapted to the specifics of our setting. Both these models have been shown to perform well on sentiment and topic prediction tasks, outperforming earlier models, such as the TSM model (Mei et al., 2007). Consequently, it can be considered as a strong baseline. 1632 3.2 Discourse-informed model In order to integrate discourse information into the discourse-agnostic model, we need to define a set of extra parameters and random variables. Drawing model parameters First, at the corpus level, we draw a distribution ˜ϕ over four discourse relations: three relations as defined in Table 1 and an additional dummy relation 4 to indicate that there is no relation between two adjacent EDUs (NoRelation). 
This distribution is drawn from an asymmetric Dirichlet prior parametrized by a vector of hyperparameters ν. These parameters encode the intuition that most pairs of EDUs do not exhibit a discourse relation relevant for the task (i.e. favor NoRelation), that is ν4 has a distinct and larger value than other parameters ν¯4. Every discourse relation c (including NoRelation which is treated here as SameSame) is associated with two groups of transition distributions, one governing transitions of sentiment ( ˜ψc) and another one controlling topic transitions (˜θc). The parameter ˜ψc,ys, defines a distribution over sentiment polarity for the EDU s + 1 given the sentiment for the sth EDU ys and the discourse relation c. This distribution encodes our beliefs about sentiment transitions between EDUs s and s + 1 related through c. For example, the distribution ˜ψSameAlt,+1 would assign higher probability mass to the positive sentiment polarity (+1) than to the other 2 sentiment levels (0, -1). Similarly, the parameter ˜θc,zs, defines a distribution over K aspects. These two families of transition distributions are each defined in the following way. For the distribution ˜θ, for relations that favor changing the aspect (SameAlt and AltAlt), the probability of the preferred (K-1) transitions is proportional to ωθ and for the remaining transitions it is proportional to 1. On the other hand, for the relations that favor keeping the same aspect (NoRelation and AltSame), the probability of the preferred transition is proportional to ω′ θ, whereas the probability of the (K-1) remaining transitions is again proportional to 1. For the sentiment transitions, the distribution ˜ψc,ys is defined in the analogous way (but depends on ωψ and ω′ ψ). These scalars are hand-coded and define soft constraints that discourse relations impose on the local flow of sentiment and aspects. The parameter ˜φc is a language model over discourse cues ˜w, which are not restricted to unigrams but can generate phrases of arbitrary (and variable) size. For this reason, we draw them from a Dirichlet process (DP) (i.e. one for each discourse relation, except for NoRelation). The base measure G0 provides the probability of an nword sequence calculated with the bigram probability model estimated from the corpus.5 This model component bears strong similarities to the Bayesian model of word segmentation (Goldwater et al., 2009), though we use the DP process to generate only the prefix of the EDU, whereas the rest of the EDU is generated from the bag-ofwords model. Drawing documents As pointed out above, the content generation is broken into two steps, where first we draw the discourse cue ˜wd,s from ˜φc and then we generate the remaining words. The second difference at the data generation step (Figure 1) is in the way the aspect and sentiment labels are drawn. As the discourse relation between the EDUs has already been chosen, we have some expectations about the values of the sentiment and aspect of the following EDU, which are encoded by the distributions ˜ψ and ˜θ. These are only soft constraints that have to be taken into consideration along with the information provided by the aspect-sentiment model. This coupling of information naturally translates into the productof-experts (PoE) (Hinton, 1999) approach, where two sources of information jointly contribute to the final result. The PoE model seems to be more appropriate here than a mixture model, as we do not want the discourse transition to overpower the sentiment-topic model. 
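As a schematic illustration of how the relation-specific transition distributions and the product-of-experts coupling could look in code (the function names, the explicit normalisation, and the use of direct sampling rather than the collapsed Gibbs procedure used for inference are all our own simplifications):

```python
import numpy as np

def transition_distribution(prefers_change, n_values, prev, omega_change, omega_keep):
    """Relation-specific distribution over the next value (aspect or sentiment),
    given the previous EDU's value.  If the relation prefers a change, the
    (n_values - 1) "switch" outcomes get weight omega_change and staying put
    gets weight 1; otherwise staying put gets weight omega_keep."""
    w = np.full(n_values, omega_change if prefers_change else 1.0)
    w[prev] = 1.0 if prefers_change else omega_keep
    return w / w.sum()

def draw_aspect_and_sentiment(theta_d, psi_doc, trans_aspect, trans_sent, rng):
    """Product-of-experts draw for one EDU: the chosen aspect and sentiment must
    receive non-negligible probability both from the document-level model
    (theta_d, and psi_doc, the sentiment distributions for this document's
    overall sentiment, indexed by aspect) and from the discourse transition
    distributions trans_aspect / trans_sent."""
    p_z = theta_d * trans_aspect
    z = rng.choice(len(p_z), p=p_z / p_z.sum())
    p_y = psi_doc[z] * trans_sent
    y = rng.choice(len(p_y), p=p_y / p_y.sum())
    return z, y
```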
In the PoE model, in order for an outcome to be chosen, it needs to have a non-negligible probability under both models. 4 Inference Since exact inference of our model is intractable, we use collapsed Gibbs sampling. The variables that need to be inferred are the topic assignments z, the sentiment assignments y, the discourse relations c and the discourse cue ˜w (or, more precisely, its length) and are all sampled jointly (for each EDU) since we expect them to be highly dependent. All other variables (i.e. unknown distributions) could be marginalized out to obtain a collapsed Gibbs sampler (Griffiths and Steyvers, 2004). 5This measure is improper but it serves the purpose of favoring long cues, the behavior arguably desirable for our application. 1633 Global parameters: ˜ϕ ∼Dir(ν) [distrib of disc rel] for each discourse relation c = 1, .., 4: ˜φc ∼DP(η, Go) [distrib of disc rel specific disc cues] ˜θc,k - fixed [distrib of rel specific aspect transitions] ˜φc,y - fixed [distrib of rel specific sent transitions] for each aspect k = 1, 2...K: for each sentiment y = −1, 0, +1: φk,y ∼Dir(λk) [unigram language models] for each global sentiment ˆy = −1, 0, +1: ψˆy,k ∼Dir(γ) [sent distrib given overall sentiment] Data Generation: for each document d: ˆyd ∼Unif(−1, 0, +1) [global sentiment] θd ∼Dir(α) [distr over aspects] for every EDU s: cd,s ∼˜ϕ [draw disc relation] if cd,s ̸= NoRelation ˜ wd,s ∼˜φcd,s [draw disc cue] zd,s ∼θd ∗˜θcd,s, zd,s−1 [draw aspect] yd,s ∼ψˆyd,zd,s∗˜ ψcd,s,yd,s−1 [draw sentiment level] for each word after disc cue: wd,s ∼φzd,s,yd,s [draw words] Figure 1: The generative story for the joint model. The components responsible for modeling discourse information are emphasized in red: when dropped, one is left with the discourse-agnostic model. Unfortunately, the use of the PoE model prevents us from marginalizing the parameters exactly. Instead, as in Naseem et al. (2009), we resort to an approximation. We assume that zd,s and yd,s are drawn twice; once from the document specific distribution and once from the discourse transition distributions. Under this simplification, we can easily derive the conditional probabilities for the collapsed Gibbs sampling. 5 Experiments To the best of our knowledge, this is the first work that aims at evaluating directly the joint information of the sentiment and aspect assignment at the sub-sentential level of full reviews; most existing studies either focus on indirect evaluation of the produced models (e.g., classifying the overall sentiment of sentences (Titov and McDonald, 2008a; Brody and Elhadad, 2010) or even reviews (Nakagawa et al., 2010; Jo and Oh, 2011)) or evaluated solely at the sentential or even document level. Consequently, in order to evaluate our methods, we created a new dataset which will be publicly released. Aspects Frequency service 246 value 55 location 121 rooms 316 sleep quality 56 cleanliness 59 amenities 180 food 81 recommendation 121 rest 306 Total 1541 Table 2: Distribution of aspects in the data. Dataset and Annotation The dataset we created consists of 13559 hotel reviews from TripAdvisor.com.6 Since our modeling is performed on the EDU level, all sentences where segmented using the SLSEG software package.7 As a result, our dataset consists of 322,935 EDUs. For creating the gold standard, 9 annotators annotated a random subset of our dataset (65 reviews, 1541 EDUs). The annotators were presented with the whole review partitioned in EDUs and were asked to annotate every EDU with the aspect and sentiment (i.e. 
+1, 0 or −1) it expresses. Table 2 presents the distribution of aspects in the dataset. The distribution of the sentiments is uniform. The label rest captures cases where EDUs do not refer to any aspect or to a very rare aspect. The inter-annotator agreement (IAA), as measured in terms of Cohen’s kappa score, was 66% for the aspect labeling, 70% for the sentiment annotation and 61% for the joint task of sentiment and aspect annotation. Though these scores may not seem very high, they are similar to the ones reported in related sentiment annotation efforts (see e.g., Ganu et al. (2009)). Experimental setup In order to quantitatively evaluate the model predictions, we run two sets of experiments. In the first, we treat the task as an unsupervised classification problem and evaluate the output of the models directly against the gold standard annotation. This is a very challenging set-up, as the model has no prior information about the aspects defined (Table 2). In the second set of experiments, we show that aspects and sentiments induced by our model can be used to construct informative features for supervised classification. In 6Downloadable from http://clic.cimec. unitn.it/˜angeliki.lazaridou/datasets/ ACL2013Sentiment.tar.gz 7www.sfu.ca/˜mtaboada/research/SLSeg. html 1634 Model Precision Recall F1 Random 3.9 3.8 3.8 SentAsp 15.0 10.2 9.2 Discourse 16.5 13.8 10.8 Table 3: Results in terms of macro-averaged precision, recall and F1. Model Unmarked Marked SentAsp 9.2 5.4 Discourse 9.3 11.5 Table 4: Separate evaluation (F1) of the “marked” and the “unmarked” EDUs. all the cases, we compare the discourse-agnostic and the discourse-informed models. In order to induce the model, we let the sampler run for 2000 iterations. We use the last sample to define the labeling. The number of topics K was set to 10 in order to match the number of aspects defined in our annotation scheme (see Table 2). The hyperpriors were chosen in a qualitative experiment over a subset of our dataset by manually inspecting the produced languages models. The resulting values are: α = 10−3, β = 5 ∗10−4, τ = 5 ∗10−4, η = 10−3, ν4 = 103, ν¯4 = 10−4, ωθ = 85 and ω′ θ = ωψ = ω′ ψ = 5. 5.1 Direct clustering evaluation Our labels encoding aspect and sentiment level can be regarded as clusters. Consequently we can apply techniques developed in the context of clustering evaluation. We use a version of the standard metrics considered for the word sense induction task (Agirre and Soroa, 2007) where a clustering is converted to a classification problem. This is achieved by splitting the gold standard into two subsets; the training portion is used to choose oneto-one correspondence from the gold classes to the induced clusters and then the chosen mapping is applied to the testing portion. We perform 10-fold cross validation and report precision, recall and F1 score. Our dataset is very skewed and the majority class (rest) is arguably the least important, so we use macro-averaging over labels and then average those across folds to arrive to the reported numbers. We compare the discourse-informed model (Discourse) against two baselines; the discourseagnostic SentAsp model and Random which assigns a random label to an EDU while respecting the distribution of labels in the training set. Table 3 presents the first analysis conducted on the full set of EDUs. 
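The cluster-to-class evaluation can be summarised with a short sketch. The greedy one-to-one assignment, the scikit-learn call and the fallback label for unmapped clusters below are our own illustrative choices; the paper does not spell out how the mapping is computed.

```python
from collections import Counter
from sklearn.metrics import precision_recall_fscore_support

def map_clusters_to_classes(pred_train, gold_train):
    """Greedy one-to-one mapping from induced cluster ids to gold classes,
    chosen on the training portion of a fold (an assignment method such as
    Hungarian matching could be substituted here)."""
    mapping, used = {}, set()
    for (cluster, label), _ in Counter(zip(pred_train, gold_train)).most_common():
        if cluster not in mapping and label not in used:
            mapping[cluster] = label
            used.add(label)
    return mapping

def evaluate_fold(pred_train, gold_train, pred_test, gold_test, fallback="rest"):
    """Apply the learned mapping to the test portion and report macro-averaged
    precision, recall and F1 over the joint aspect+sentiment labels."""
    mapping = map_clusters_to_classes(pred_train, gold_train)
    mapped = [mapping.get(c, fallback) for c in pred_test]
    return precision_recall_fscore_support(
        gold_test, mapped, average="macro", zero_division=0)[:3]
```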
We observe that by incorporating latent discourse relation we improve perContent Aspect Polarity 1 but certainly off its greatness value neg 2 and while small they are nice rooms pos 3 but it is not free for all guests amenities neg 4 and the water was brown clean neg 5 and no tea making facilities rooms neg 6 when i checked out service pos 7 and if you do not service neg 8 when we got home clean neu Table 5: Examples of EDUs where local information is not sufficiently informative. formance over the discourse-agnostic model SentAsp (statistically significant according to paired ttest with p < 0.01). Note that fairly low scores in this evaluation setting are expected for any unsupervised model of sentiment and topics, as models are unsupervised both in the aspect-specific sentiment and in topic labels and the total number of labels is 28 (all aspects can be associated with the 3 sentiment levels except for rest which can only be used with neutral (0) sentiment). Consequently, induced topics, though informative (as we confirm in Section 5.3), may not correspond to the topics defined in the gold standard. For example, one well-known property of LDA-style topic models is their tendency to induce topics which account for similar fraction of words in the dataset (Jagarlamudi et al., 2012), thus, over-splitting ‘heavy’ topics (e.g. rooms in our case). The same, though to lesser degree, is true for sentiment levels where the border between neutral and positive (or negative) is also vaguely defined. To gain insight into our model, we conducted an experiment similar to the one presented in Somasundaran et al. (2009). We divide the dataset in two subsets; one containing all EDUs starting with a discourse cue (“marked”) and one containing the remaining EDUs (“unmarked”). We hypothesize that the effect of the discourse-aware model should be stronger on the first subset, since the presence of the connective indicates the possibility of a discourse relation with the previous EDU. The set of discourse connectives is taken from the Penn Discourse Treebank (Prasad et al., 2008), thus creating a list of 240 potential connectives. Table 5 presents a subset of “marked” EDUs for which trying to assign the sentiment and aspect out of context (i.e. without the previous EDU) is a difficult task. In examples 1-3 there is no explicit mention of the aspect. However, there is an anaphoric expression (marked in bold) which 1635 refers to a mention of the aspect in some previous EDU. On the other hand, in examples 4 and 5 there is an ambiguity in the choice of aspect; in example 5, tea making facilities can refer to a breakfast at the hotel (label food) or to facilities in the room (label rooms). Finally, examples 6-8 are too short and not informative at all which indicates that the segmentation tool does not always predict a desirable segmentation. Consequently, automatic induction of segmentation may be a better option. Table 4 presents quantitative results of this analysis. Although the performance over the “unmarked” example is the same for the two models, this is not the case for the “marked” instances where the discourse-informed model leverages the discourse signal and achieves better performance. This behavior agrees with our initial hypothesis, and suggests that our discourse representation, though application-specific, relies in part on the information encoded in linguistically-defined discourse cues. We will confirm this intuition in the qualitative evaluation section. 
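The split into "marked" and "unmarked" EDUs amounts to a simple prefix test against a connective list; the sketch below is our own simplification (in particular the lower-casing and the token-level matching of multiword connectives are assumptions).

```python
def split_by_discourse_cue(edus, connectives):
    """Partition EDUs into those beginning with a known discourse connective
    ("marked") and the rest ("unmarked").  `connectives` stands for the ~240
    potential connectives taken from the Penn Discourse Treebank."""
    conn_tokens = [c.lower().split() for c in connectives]
    marked, unmarked = [], []
    for edu in edus:
        tokens = edu.lower().split()
        is_marked = any(tokens[:len(c)] == c for c in conn_tokens)
        (marked if is_marked else unmarked).append(edu)
    return marked, unmarked

# Example with an abbreviated connective list:
marked, unmarked = split_by_discourse_cue(
    ["but it is not free for all guests", "the water was brown"],
    ["but", "although", "however", "even though"])
assert marked == ["but it is not free for all guests"]
```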
The increase for the “marked” EDUs does not translate into greater differences for the overall scores (Table 3) as marked relations are considerably less frequent than unmarked ones in our gold standard (i.e. 35% of the EDUs are “marked”). Nevertheless, this clearly suggests that the discourse-informed model is in fact capable of exploiting discourse signal. 5.2 Qualitative analysis To investigate the quality of the induced discourse structure, we present the most frequent discourse cues extracted for every discourse relation. Table 6 presents a selection of cues that best explain the discourse relation they have been associated with. A general observation is that among the cues there are not only “traditional” discourse connectives like even though, although, and, but also cues that are discriminative for the specific application. In relation SameAlt we can mostly observe phrases that tend to introduce a new aspect, since an explicit mention of it is provided (e.g the location is, the room was) and more specific phrases like in addition are used to introduce a new aspect with the same sentiment. However, these cues reveal important information about the aspect of the EDU, and since they are associated with the language model ˜φ, they are not visible anymore to the language model of aspects φ. Cues for the relation AltSame also include Discourse Discourse Cues relation SameAlt the location is , the room was, the hotel has, and the room, and the bed, breakfast was, the staff were, in addition, good luck AltSame but, and, it was, and it was, and they, although, and it, but it, but it was, however, which was, this is, this was, they were, the only thing, even though, unfortunately, needless to say, fortunately AltAlt the room was, the staff were, the only, the hotel is, but the, however, also, or, overall i, unfortunately, we will definitely, on the plus, the only downside , even though, and even though, i would definately Table 6: Induced cues from the discourse relations phrases that contain some anaphoric expressions, which might refer to previous mentions of an aspect in the discourse (i.e. previous EDU). We expect that since there is an anaphoric expression, explicit lexical features for the aspect will be missing, making thus the decision concerning aspect assignment ambiguous for any discourse-agnostic model. Interestingly, we found the expressions unfortunately, fortunately, the only thing in the same relation, since all indicate a change in sentiment. Finally, AltAlt can be viewed as a mixture of the other two relations. Furthermore, for this relation we can find expressions that tend to be used at the end of a review, since at this point we normally change the aspect and often even sentiment. Some examples of these cases are overall, we will definitely and even the misspelled version of the latter i would definately. 5.3 Features in supervised learning As an additional experiment to demonstrate informative of the output of the two models, we design a supervised learning task of predicting sentiment and topic of EDUs. In this setting, the feature vector of every EDU consists of its bagof-word-representation to which we add two extra features; the models’ predictions of topic and sentiment. We train a support vector machine with a polynomial kernel using the default parameters of Weka8 and perform 10-fold cross-validation. Table 7 presents results of this analysis in terms of accuracy for four classification tasks, i.e. 
predicting both sentiment and topic, only sentiment and only topic for all EDUs, as well as predicting sentiment and topic for the “marked” dataset. First, we observe that incorporation of the topic8http://www.cs.waikato.ac.nz/ml/weka/ 1636 Features aspect+sentiment aspect sentiment Marked only (28 classes) (10 classes) (3 classes) sentiment+aspect (28 classes) only unigrams 36.3 49.8 57.1 26.2 unigrams + SentAsp 38.0 50.4 59.3 27.8 unigrams + Discourse 39.1 52.4 59.4 29.1 Table 7: Supervised learning at the EDU level (accuracy) model features on a unigram-only model results in an improvement in classification performance across all tasks (predicting sentiment, predicting aspects, or both); as a matter of fact, our accuracy results for predicting sentiment are comparable to the sentence-level results presented by T¨ackstr¨om and McDonald (2011). We have to stress that accuracies for the joint task (i.e. predicting both sentiment and topic) are expected to be lower since it can also be seen as the product of the two other tasks (i.e. predicting only sentiment and only topic). We also observe that the features induced from the Discourse model result in higher accuracy than the ones from the discourseagnostic model SentAsp both in the complete set of EDUs and the “marked” subset, results that are in line with the ones presented in Table 4. Finally, the fact that the results for the complete set of EDUs are higher than the ones for the “marked” dataset clearly suggests that the latter constitute a hard case for sentiment analysis, in which exploiting discourse signal proves to be beneficial. 6 Related Work Recently, there has been significant interest in leveraging content structure for a number of NLP tasks (Webber et al., 2011). Sentiment analysis has not been an exception to this and discourse has been used in order to enforce constraints on the assignment of polarity labels at several granularity levels, ranging from the lexical level (Polanyi and Zaenen, 2006) to the review level (Taboada et al., 2011). One way to deal with this problem is to model the interactions by using a precompiled set of polarity shifters (Nakagawa et al., 2010; Polanyi and Zaenen, 2006; Sadamitsu et al., 2008). Socher et al. (2011) defined a recurrent neural network model, which, in essence, learns those polarity shifters relying on sentence-level sentiment labels. Though successful, this model is unlikely to capture intra-sentence non-local phenomena such as effect of discourse connectives, unless it is provided with syntactic information as an input. This may be problematic for the noisy sentiment-analysis domain and especially for poor-resource languages. Similar to our work, others have focused on modeling interactions between phrases and sentences. However, this has been achieved by either using a subset of relations that can be found in discourse theories (Zhou et al., 2011; Asher et al., 2008; Snyder and Barzilay, 2007) or by using directly (Taboada et al., 2008) the output of discourse parsers (Soricut and Marcu, 2003). Discourse cues as predictive features of topic boundaries have also been considered in Eisenstein and Barzilay (2008). This work was extended by Trivedi and Eisenstein (2013), where discourse connectors are used as features for modeling subjectivity transitions. Another related line of research was presented in Somasundaran et al. (2009) where a domainspecific discourse scheme is considered. 
Similarly to our set-up, discourse relations enforce constraints on sentiment polarity of associated sentiment expressions. Somasundaran et al. (2009) show that gold-standard discourse information encoded in this way provides a useful signal for prediction of sentiment, but they leave automatic discourse relation prediction for future work. They use an integer linear programming framework to enforce agreement between classifiers and soft constraints provided by discourse annotations. This contrasts with our work; we do not rely on expert discourse annotation, but rather induce both discourse relations and cues jointly with aspect and sentiment. 7 Conclusions and Future Work In this work, we showed that by jointly inducing discourse information in the form of discourse cues, we can achieve better predictions for aspectspecific sentiment polarity. Our contribution consists in proposing a general way of how discourse information can be integrated in any LDA-style discourse-agnostic model of aspect and sentiment. In the future, we aim at modeling more flexible sets of discourse relations and automatically inducing discourse segmentation relevant to the task. 1637 References Eneko Agirre and Aitor Soroa. 2007. Semeval-2007 task 02: Evaluating word sense induction and discrimination systems. In Proceedings of the SemEval, pages 7–12. Nicholas Asher, Farah Benamara, and Yvette Yannick Mathieu. 2008. Distilling opinion in discourse: A preliminary study. Proceedings of Coling, pages 5– 8. Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Proceedings of NAACL, pages 804–812. Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of EMNLP, pages 334–343. Emily B Fox, Erik B Sudderth, Michael I Jordan, and Alan S Willsky. 2008. An HDP-HMM for systems with state persistence. In Proceedings of ICML. Gayatree Ganu, Noemie Elhadad, and Amelie Marian. 2009. Beyond the stars: Improving rating predictions using review text content. In Proceedings of WebDB. Sharon Goldwater, Thomas L Griffiths, and Mark Johnson. 2009. A bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21–54. Thomas L Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences of the United States of America, 101(Suppl 1):5228–5235. Geoffrey E Hinton. 1999. Products of experts. In Proceedings of ICANN, volume 1, pages 1–6. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of ACM SIGKDD, pages 168–177. Jagadeesh Jagarlamudi, Hal Daum´e III, and Raghavendra Udupa. 2012. Incorporating lexical priors into topic models. Proceedings of EACL, pages 204– 213. Yohan Jo and Alice H Oh. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of WSDM, pages 815–824. Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceeding of CIKM, pages 375–384. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243–281. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of WWW, pages 171–180. Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using crfs with hidden variables. In Proceedings of NAACL, pages 786–794. 
Tahira Naseem, Benjamin Snyder, Jacob Eisenstein, and Regina Barzilay. 2009. Multilingual part-of-speech tagging: Two unsupervised approaches. Journal of Artificial Intelligence Research, 36(1):341–385. Livia Polanyi and Annie Zaenen. 2006. Contextual valence shifters. Computing attitude and affect in text: Theory and applications, pages 1–10. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In Proceedings of LREC. Kugatsu Sadamitsu, Satoshi Sekine, and Mikio Yamamoto. 2008. Sentiment analysis based on probabilistic models using inter-sentence information. In Proceedings of ACL, pages 2892–2896. Benjamin Snyder and Regina Barzilay. 2007. Multiple aspect ranking using the good grief algorithm. In Proceedings of HLT-NAACL, pages 300–307. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP, pages 151–161. Swapna Somasundaran, Janyce Wiebe, and Josef Ruppenhofer. 2008. Discourse level opinion interpretation. In Proceedings of Coling, pages 801–808. Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In Proceedings of EMNLP, pages 170–179. Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of NAACL, pages 149–156. Maite Taboada, Kimberly Voll, and Julian Brooke. 2008. Extracting sentiment as a function of discourse structure and topicality. Simon Fraser University, Tech. Rep, 20. Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexiconbased methods for sentiment analysis. Computational Linguistics, 37(2):267–307. Oscar T¨ackstr¨om and Ryan McDonald. 2011. Semisupervised latent variable models for sentence-level sentiment analysis. In Proceedings of ACL, pages 569–574. Ivan Titov and Ryan McDonald. 2008a. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of ACL, pages 308–316. 1638 Ivan Titov and Ryan McDonald. 2008b. Modeling online reviews with multi-grain topic models. In Proceedings of WWW, pages 112–120. Rakshit Trivedi and Jacob Eisenstein. 2013. Discourse connectors for latent subjectivity in sentiment analysis. In In Proceedings of NAACL. Peter D Turney and Michael L Littman. 2002. Unsupervised learning of semantic orientation from a hundred-billion-word corpus. Kimberly Voll and Maite Taboada. 2007. Not all words are created equal: Extracting semantic orientation as a function of adjective relevance. In Proceedings of Australian Conf. on AI. Bonnie Webber, Markus Egg, and Valia Kordoni. 2011. Discourse structure and language technology. Natural Language Engineering, 1(1):1–54. Lanjun Zhou, Binyang Li, Wei Gao, Zhongyu Wei, and Kam-Fai Wong. 2011. Unsupervised discovery of discourse relations for eliminating intra-sentence polarity ambiguities. In Proceedings EMNLP, pages 162–171. 1639
2013
160
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1640–1649, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Joint Inference for Fine-grained Opinion Extraction Bishan Yang Department of Computer Science Cornell University [email protected] Claire Cardie Department of Computer Science Cornell University [email protected] Abstract This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction. 1 Introduction Fine-grained opinion analysis is concerned with identifying opinions in text at the expression level; this includes identifying the subjective (i.e., opinion) expression itself, the opinion holder and the target of the opinion (Wiebe et al., 2005). The task has received increasing attention as many natural language processing applications would benefit from the ability to identify text spans that correspond to these key components of opinions. In question-answering systems, for example, users may submit questions in the form “What does entity A think about target B?”; opinion-oriented summarization systems also need to recognize opinions and their targets and holders. In this paper, we address the task of identifying opinion-related entities and opinion relations. We consider three types of opinion entities: opinion expressions or direct subjective expressions as defined in Wiebe et al. (2005) — expressions that explicitly indicate emotions, sentiment, opinions or other private states (Quirk et al., 1985) or speech events expressing private states; opinion targets — expressions that indicate what the opinion is about; and opinion holders — mentions of whom or what the opinion is from. Consider the following examples in which opinion expressions (O) are underlined and targets (T) and holders (H) of the opinion are bracketed. S1: [The workers][H1,2] were irked[O1] by [the government report][T1] and were worried[O2] as they went about their daily chores. S2: From the very start it could be predicted[O1] that on the subject of economic globalization, [the developed states][T1,2] were going to come across fierce opposition[O2]. The numeric subscripts denote linking relations, one of IS-ABOUT or IS-FROM. In S1, for instance, opinion expression “were irked” (O1) ISABOUT “the government report” (T1). Note that the IS-ABOUT relation can contain an empty target (e.g. “were worried” in S1); similarly for ISFROM w.r.t. the opinion holder (e.g. “predicted” in S2). We also allow an opinion entity to be involved in multiple relations (e.g. “the developed states” in S2). Not surprisingly, fine-grained opinion extraction is a challenging task due to the complexity and variety of the language used to express opinions and their components (Pang and Lee, 2008). 
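For concreteness, the entities and relations annotated in S1 could be represented with a pair of small record types; the class names, field names and token offsets below are our own illustrative choices rather than part of the MPQA annotation scheme.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class OpinionEntity:
    span: Tuple[int, int]   # (first token index, last token index), inclusive
    kind: str               # "expression", "target" or "holder"
    text: str

@dataclass
class OpinionRelation:
    relation: str                       # "IS-ABOUT" or "IS-FROM"
    opinion: OpinionEntity
    argument: Optional[OpinionEntity]   # None encodes an implicit target/holder

# Sentence S1, with token offsets counted from 0:
workers = OpinionEntity((0, 1), "holder", "The workers")
irked = OpinionEntity((2, 3), "expression", "were irked")           # O1
report = OpinionEntity((5, 7), "target", "the government report")   # T1
worried = OpinionEntity((9, 10), "expression", "were worried")      # O2

s1_relations = [
    OpinionRelation("IS-ABOUT", irked, report),    # O1 IS-ABOUT T1
    OpinionRelation("IS-FROM", irked, workers),    # H1
    OpinionRelation("IS-FROM", worried, workers),  # H2: one holder, two opinions
    OpinionRelation("IS-ABOUT", worried, None),    # O2 has no explicit target
]
```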
Nevertheless, much progress has been made in extracting opinion information from text. Sequence labeling models have been successfully employed to identify opinion expressions (e.g. (Breck et al., 1640 2007; Yang and Cardie, 2012)) and relation extraction techniques have been proposed to extract opinion holders and targets based on their linking relations to the opinion expressions (e.g. Kim and Hovy (2006), Kobayashi et al. (2007)). However, most existing work treats the extraction of different opinion entities and opinion relations in a pipelined manner: the interaction between different extraction tasks is not modeled jointly and error propagation is not considered. One exception is Choi et al. (2006), which proposed an ILP approach to jointly identify opinion holders, opinion expressions and their IS-FROM linking relations, and demonstrated the effectiveness of joint inference. Their ILP formulation, however, does not handle implicit linking relations, i.e. opinion expressions with no explicit opinion holder; nor does it consider IS-ABOUT relations. In this paper, we present a model that jointly identifies opinion-related entities, including opinion expressions, opinion targets and opinion holders as well as the associated opinion linking relations, IS-ABOUT and IS-FROM. For each type of opinion relation, we allow implicit (i.e. empty) arguments for cases when the opinion holder or target is not explicitly expressed in text. We model entity identification as a sequence tagging problem and relation extraction as binary classification. A joint inference framework is proposed to jointly optimize the predictors for different subproblems with constraints that enforce global consistency. We hypothesize that the ambiguity in the extraction results will be reduced and thus, performance increased. For example, uncertainty w.r.t. the spans of opinion entities can adversely affect the prediction of opinion relations; and evidence of opinion relations might provide clues to guide the accurate extraction of opinion entities. We evaluate our approach using a standard corpus for fine-grained opinion analysis (the MPQA corpus (Wiebe et al., 2005)) and demonstrate that our model outperforms by a significant margin traditional baselines that do not employ joint inference for extracting opinion entities and different types of opinion relations. 2 Related Work Significant research effort has been invested into fine-grained opinion extraction for open-domain text such as news articles (Wiebe et al., 2005; Wilson et al., 2009). Many techniques were proposed to identify the text spans for opinion expressions (e.g. (Breck et al., 2007; Johansson and Moschitti, 2010b; Yang and Cardie, 2012)), opinion holders (e.g. (Choi et al., 2005)) and topics of opinions (Stoyanov and Cardie, 2008). Some consider extracting opinion targets/holders along with their relation to the opinion expressions. Kim and Hovy (2006) identifies opinion holders and targets by using their semantic roles related to opinion words. Ruppenhofer et al. (2008) argued that semantic role labeling is not sufficient for identifying opinion holders and targets. Johansson and Moschitti (2010a) extract opinion expressions and holders by applying reranking on top of sequence labeling methods. Kobayashi et al. 
(2007) considered extracting “aspect-evaluation” relations (relations between opinion expressions and targets) by identifying opinion expressions first and then searching for the most likely target for each opinion expression via a binary relation classifier. All these methods extract opinion arguments and opinion relations in separate stages instead of extracting them jointly. Most similar to our method is Choi et al. (2006), which jointly extracts opinion expressions, holders and their IS-FROM relations using an ILP approach. In contrast, our approach (1) also considers the IS-ABOUT relation which is arguably more complex due to the larger variety in the syntactic structure exhibited by opinion expressions and their targets, (2) handles implicit opinion relations (opinion expressions without any associated argument), and (3) uses a simpler ILP formulation. There has also been substantial interest in opinion extraction from product reviews (Liu, 2012). Most existing approaches focus on the extraction of opinion targets and their associated opinion expressions and usually employ a pipeline architecture: generate candidates of opinion expressions and opinion targets first, and then use rule-based or machine-learning-based approaches to identify potential relations between opinions and targets (Hu and Liu, 2004; Wu et al., 2009; Liu et al., 2012). In addition to pipeline approaches, bootstrapping-based approaches were proposed (Qiu et al., 2009; Qiu et al., 2011; Zhang et al., 2010) to identify opinion expressions and targets iteratively; however, they suffer from the problem of error propagation. There is much work demonstrating the benefit of performing global inference. Roth and Yih 1641 (2004) proposed a global inference approach in the formulation of a linear program (LP) and applied it to the task of extracting named entities and relations simultaneously. Their problem is similar to ours — the difference is that Roth and Yih Roth and Yih (2004) assume that named entity spans are known a priori and only their labels need to be assigned. Joint inference has also been applied to semantic role labeling (Punyakanok et al., 2008; Srikumar and Roth, 2011; Das et al., 2012), where the goal is to jointly identify semantic arguments for given lexical predicates. The problem is conceptually similar to identifying opinion arguments for opinion expressions, however, we do not assume prior knowledge of opinion expressions (unlike in SRL, where predicates are given). 3 Model As proposed in Section 1, we consider the task of jointly identifying opinion entities and opinion relations. Specifically, given a sentence, our goal is to identify spans of opinion expressions, opinion arguments (targets and holders) and their associated linking relations. Training data consists of text with manually annotated opinion expression and argument spans, each with a list of relation ids specifying the linking relation between opinion expressions and their arguments. In this section, we will describe how we model opinion entity identification and opinion relation extraction, and how we combine them in a joint inference model. 3.1 Opinion Entity Identification We formulate the task of opinion entity identification as a sequence labeling problem and employ conditional random fields (CRFs) (Lafferty et al., 2001) to learn the probability of a sequence assignment y for a given sentence x. 
Through inference we can find the best sequence assignment for sentence x and recover the opinion entities according to the standard “IOB” encoding scheme. We consider four entity labels: D, T, H, N, where D denotes opinion expressions, T denotes opinion targets, H denotes opinion holders and N denotes “NONE” entities. We define potential function fiz that gives the probability of assigning a span i with entity label z, and the probability is estimated based on the learned parameters from CRFs. Formally, given a within-sentence span i = (a, b), where a is the starting position and b is the end position, and label z ∈{D, T, H}, we have fiz = p(ya = Bz, ya+1 = Iz, ..., yb = Iz, yb+1 ̸= Iz|x) fiN = p(ya = O, ..., yb = O|x) These probabilities can be efficiently computed using the forward-backward algorithm. 3.2 Opinion Relation Extraction We consider extracting the IS-ABOUT and ISFROM opinion relations. In the following we will not distinguish these two relations, since they can both be characterized as relations between opinion expressions and opinion arguments, and the methods for relation extraction are the same. We treat the relation extraction problem as a combination of two binary classification problems: opinion-arg classification, which decides whether a pair consisting of an opinion candidate o and an argument candidate a forms a relation; and opinion-implicit-arg classification, which decides whether an opinion candidate o is linked to an implicit argument, i.e. no argument is mentioned. We define a potential function r to capture the strength of association between an opinion candidate o and an argument candidate a, roa = p(y = 1|x) −p(y = 0|x) where p(y = 1|x) and p(y = 0|x) are the logistic regression estimates of the positive and negative relations. Similarly, we define potential ro∅to denote the confidence of predicting opinion span o associated with an implicit argument. 3.2.1 Opinion-Arg Relations For opinion-arg classification, we construct candidates of opinion expressions and opinion arguments and consider each pair of an opinion candidate and an argument candidate as a potential opinion relation. Conceptually, all possible subsequences in the sentence are candidates. To filter out candidates that are less reasonable, we consider the opinion expressions and arguments obtained from the n-best predictions by CRFs1. We also employ syntactic patterns from dependency 1We randomly split the training data into 10 parts and obtained the 50-best CRF predictions on each part for the generation of candidates. We also experimented with candidates generated from more CRF predictions, but did not find any performance improvement for the task. 1642 trees to generate candidates. Specifically, we selected the most common patterns of the shortest dependency paths2 between an opinion candidate o and an argument candidate a in our dataset, and include all pairs of candidates that satisfy at least one dependency pattern. For the IS-ABOUT relation, the top three patterns are (1) o ↑dobj a, (2) o ↑ccomp x ↑nsubj a (x is a word in the path that is not covered by either o nor a), (3) o ↑ccomp a; for the IS-FROM relation, the top three patterns are (1) o ↑nsubj a, (2) o ↑poss a, (3) o ↓ccomp x ↑nsubj a. Note that generating candidates this way will give us a large number of negative examples. Similar to the preprocessing approach in (Choi et al., 2006), we filter pairs of opinion and argument candidates that do not overlap with any gold standard relation in our training data. 
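A small sketch of how the relation potential and the dependency-pattern test might be realised (the use of scikit-learn's LogisticRegression and the string encoding of directed dependency edges are assumptions on our part; only the pattern lists themselves come from the text above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def relation_potential(clf: LogisticRegression, feats: np.ndarray) -> np.ndarray:
    """r = p(y=1|x) - p(y=0|x) for each candidate (opinion, argument) pair.
    For a binary classifier this equals 2*p(y=1|x) - 1, so the potential lies
    in [-1, 1] and is positive exactly when the classifier favours the relation."""
    proba = clf.predict_proba(feats)                  # columns follow clf.classes_
    p_pos = proba[:, list(clf.classes_).index(1)]
    return 2.0 * p_pos - 1.0

# The frequent shortest-dependency-path patterns used for candidate generation,
# written as sequences of directed edges between opinion (o) and argument (a).
IS_ABOUT_PATTERNS = [("up:dobj",), ("up:ccomp", "up:nsubj"), ("up:ccomp",)]
IS_FROM_PATTERNS = [("up:nsubj",), ("up:poss",), ("down:ccomp", "up:nsubj")]

def matches_pattern(dep_path, patterns):
    """dep_path is the shortest dependency path between an opinion candidate and
    an argument candidate, e.g. ("up:ccomp", "up:nsubj")."""
    return tuple(dep_path) in {tuple(p) for p in patterns}
```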
Many features we use are common features in the SRL tasks (Punyakanok et al., 2008) due to the similarity of opinion relations to the predicate-argument relations in SRL (Ruppenhofer et al., 2008; Choi et al., 2006). In general, the features aim to capture (a) local properties of the candidate opinion expressions and arguments and (b) syntactic and semantic attributes of their relation. Words and POS tags: the words contained in the candidate and their POS tags. Lexicon: For each word in the candidate, we include its WordNet hypernyms and its strength of subjectivity in the Subjectivity Lexicon3 (e.g. weaksubj, strongsubj). Phrase type: the syntactic category of the deepest constituent that covers the candidate in the parse tree, e.g. NP, VP. Semantic frames: For each verb in the opinion candidate, we include its frame types according to FrameNet4. Distance: the relative distance (number of words) between the opinion and argument candidates. Dependency Path: the shortest path in the dependency tree between the opinion candidate and the target candidate, e.g. ccomp↑nsubj↑. We also include word types and POS types in the paths, e.g. opinion↑ccompsuffering↑nsubjpatient, 2We use the Stanford Parser to generate parse trees and dependency graphs. 3http://mpqa.cs.pitt.edu/lexicons/ subj_lexicon/ 4https://framenet.icsi.berkeley.edu/ fndrupal/ NN↑ccompVBG↑nsubjNN. The dependency path has been shown to be very useful in extracting opinion expressions and opinion holders (Johansson and Moschitti, 2010a). 3.2.2 Opinion-Implicit-Arg Relations When the opinion-arg relation classifier predicts that there is no suitable argument for the opinion expression candidate, it does not capture the possibility that an opinion candidate may associate with an implicit argument. To incorporate knowledge of implicit relations, we build an opinion-implicitarg classifier to identify an opinion candidate with an implicit argument based on its own properties and context information. For training, we consider all gold-standard opinion expressions as training examples — including those with implicit arguments — as positive examples and those associated with explicit arguments as negative examples. For features, we use words, POS tags, phrase types, lexicon and semantic frames (see Section 3.2.1 for details) to capture the properties of the opinion expression, and also features that capture the context of the opinion expression: Neighboring constituents: The words and grammatical roles of neighboring constituents of the opinion expression in the parse tree — the left and right sibling of the deepest constituent containing the opinion expression in the parse tree. Parent Constituent: The grammatical role of the parent constituent of the deepest constituent containing the opinion expression. Dependency Argument: The word types and POS types of the arguments of the dependency patterns in which the opinion expression is involved. We consider the same dependency patterns that are used to generate candidates for opinion-arg classification. 3.3 Joint Inference The inference goal is to find the optimal prediction for both opinion entity identification and opinion relation extraction. For a given sentence, we denote O as a set of opinion candidates, Ak as a set of argument candidates, where k denotes the type of opinion relation — IS-ABOUT or IS-FROM — and S as a set of within-sentence spans that cover all of the opinion candidates and argument can1643 didates. 
We introduce binary variable xiz, where xiz = 1 means span i is associated with label z. We also introduce binary variable uij for every pair of opinion candidate i and argument candidate j, where uij = 1 means i forms an opinion relation with j, and binary variable vik for every opinion candidate i in relation type k, where vik = 1 means i associates with an implicit argument in relation k. Given the binary variables xiz, uij, vik, it is easy to recover the entity and relation assignment by checking which spans are labeled as opinion entities, and which opinion span and argument span form an opinion relation. The objective function is defined as a linear combination of the potentials from different predictors with a parameter λ to balance the contribution of two components: opinion entity identification and opinion relation extraction.

\arg\max_{x,u,v} \; \lambda \sum_{i \in S} \sum_{z} f_{iz} x_{iz} + (1-\lambda) \sum_{k} \sum_{i \in O} \Big( \sum_{j \in A_k} r_{ij} u_{ij} + r_{i\emptyset} v_{ik} \Big) \quad (1)

It is subject to the following linear constraints:

Constraint 1: Uniqueness. For each span i, we must assign one and only one label z, where z ∈ {H, D, T, N}.

\sum_{z} x_{iz} = 1

Constraint 2: Non-overlapping. If two spans i and j overlap, then at most one of the spans can be assigned to a non-NONE entity label: H, D, T.

\sum_{z \neq N} x_{iz} + \sum_{z \neq N} x_{jz} \leq 1

Constraint 3: Consistency between the opinion-arg and opinion-implicit-arg classifiers. For an opinion candidate i, if it is predicted to have an implicit argument in relation k, vik = 1, then no argument candidate should form a relation with i. If vik = 0, then there exists some argument candidate j ∈ Ak such that uij = 1. We introduce two auxiliary binary variables aik and bik to limit the maximum number of relations associated with each opinion candidate to be less than or equal to three5. When vik = 1, aik and bik have to be 0.

\sum_{j \in A_k} u_{ij} = 1 - v_{ik} + a_{ik} + b_{ik}, \qquad a_{ik} \leq 1 - v_{ik}, \quad b_{ik} \leq 1 - v_{ik}

Constraint 4: Consistency between the opinion-arg classifier and the opinion entity extractor. Suppose an argument candidate j in relation k is assigned an argument label by the entity extractor, that is, xjz = 1 (z = T for the IS-ABOUT relation and z = H for the IS-FROM relation); then there exist some opinion candidates that associate with j. Similar to Constraint 3, we introduce auxiliary binary variables cjk and djk to enforce that an argument j links to at most three opinion expressions. If xjz = 0, then no relations should be extracted for j.

\sum_{i \in O} u_{ij} = x_{jz} + c_{jk} + d_{jk}, \qquad c_{jk} \leq x_{jz}, \quad d_{jk} \leq x_{jz}

Constraint 5: Consistency between the opinion-implicit-arg classifier and the opinion entity extractor. When an opinion candidate i is predicted to associate with an implicit argument in relation k, that is, vik = 1, then we allow xiD to be either 1 or 0 depending on the confidence of labeling i as an opinion expression. When vik = 0, there exists some opinion argument associated with the opinion candidate, and we enforce xiD = 1, which means the entity extractor agrees to label i as an opinion expression.

v_{ik} + x_{iD} \geq 1

Note that in our ILP formulation, the label assignment for a candidate span involves one multiple-choice decision among different opinion entity labels and the “NONE” entity label. The scores of different label assignments are comparable for the same span since they come from one entity extraction model. This makes our ILP formulation advantageous over the ILP formulation proposed in Choi et al.
(2006), which needs m binary decisions for a candidate span, where m is the number of types of opinion entities, and the score for each possible label assignment is obtained by 5It is possible to add more auxiliary variables to allow more than three arguments to link to an opinion expression, but this rarely happens in our experiments. For the IS-FROM relation, we set aik = 0, bik = 0 since an opinion expression usually has only one holder. 1644 the sum of raw scores from m independent extraction models. This design choice also allows us to easily deal with multiple types of opinion arguments and opinion relations. 4 Experiments For evaluation, we used version 2.0 of the MPQA corpus (Wiebe et al., 2005; Wilson, 2008), a widely used data set for fine-grained opinion analysis.6 We considered the subset of 482 documents7 that contain attitude and target annotations. There are a total of 9,471 sentences with opinionrelated labels at the phrase level. We set aside 132 documents as a development set and use 350 documents as the evaluation set. All experiments employ 10-fold cross validation on the evaluation set; the average over the 10 runs is reported. Our gold standard opinion expressions, opinion targets and opinion holders correspond to the direct subjective annotations, target annotations and agent annotations, respectively. The IS-FROM relation is obtained from the agent attribute of each opinion expression. The IS-ABOUT relation is obtained from the attitude annotations: each opinion expression is annotated with attitude frames and each attitude frame is associated with a list of targets. The relations may overlap: for example, in the following sentence, the target of relation 1 contains relation 2. [John]H1 is happyO1 because [[he]H2 lovesO2 [being at Enderly Park]T2]T1. We discard relations that contain sub-relations because we believe that identifying the sub-relations usually is sufficient to recover the discarded relations. (Prediction of overlapping relations is considered as future work.) In the example above, we will identify (loves, being at Enderly Park) as an IS-ABOUT relation and happy as an opinion expression associated with an implicit target. Table 1 shows some statistics of the corpus. We adopted the evaluation metrics for entity and relation extraction from Choi et al. (2006), which include precision, recall, and F1-measure according to overlap and exact matching metrics.8 We 6Available at http://www.cs.pitt.edu/mpqa/. 7349 news articles from the original MPQA corpus, 84 Wall Street Journal articles (Xbank), and 48 articles from the American National Corpus. 8Overlap matching considers two spans to match if they overlap, while exact matching requires two spans to be exactly the same. Opinion Target Holder TotalNum 5849 4676 4244 Opinion-arg Relations Implicit Relations IS-ABOUT 4823 1302 IS-FROM 4662 1187 Table 1: Data Statistics of the MPQA Corpus. will focus our discussion on results obtained using overlap matching, since the exact boundaries of opinion entities are hard to define even for human annotators (Wiebe et al., 2005). We trained CRFs for opinion entity identification using the following features: indicators for words, POS tags, and lexicon features (the subjectivity strength of the word in the Subjectivity Lexicon). All features are computed for the current token and tokens in a [−1, +1] window. We used L2-regularization; the regularization parameter was tuned using the development set. 
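Before describing the remaining training details, the joint ILP of Section 3.3 can be written down with any off-the-shelf ILP toolkit. The following is a simplified, self-contained sketch using the PuLP modeling library and its bundled CBC solver, which is an assumption made only for illustration (the experiments reported here call GLPK, as noted below); the potentials are made-up numbers standing in for the CRF and logistic regression scores, and only Constraint 1 and a reduced form of Constraint 3 are included.

```python
# Simplified illustration of the joint ILP in Section 3.3 (not the GLPK setup
# used in the experiments). Potentials below are made-up stand-ins for the CRF
# and logistic regression scores; Constraints 2, 4 and 5 are omitted, and
# Constraint 3 is reduced to "one explicit argument unless implicit".

import pulp

LABELS = ["D", "T", "H", "N"]            # expression, target, holder, none
spans = ["s1", "s2"]                     # candidate spans S (hypothetical)
opinions = ["s1"]                        # opinion candidates O
arguments = ["s2"]                       # argument candidates A_k (one relation type)

f = {("s1", "D"): 0.8, ("s1", "T"): 0.1, ("s1", "H"): 0.05, ("s1", "N"): 0.05,
     ("s2", "D"): 0.1, ("s2", "T"): 0.7, ("s2", "H"): 0.1, ("s2", "N"): 0.1}
r = {("s1", "s2"): 0.6}                  # opinion-arg potential r_ij
r_implicit = {"s1": -0.2}                # opinion-implicit-arg potential r_i0
lam = 0.5

prob = pulp.LpProblem("joint_opinion_extraction", pulp.LpMaximize)
x = {(i, z): pulp.LpVariable(f"x_{i}_{z}", cat="Binary")
     for i in spans for z in LABELS}
u = {(i, j): pulp.LpVariable(f"u_{i}_{j}", cat="Binary")
     for i in opinions for j in arguments}
v = {i: pulp.LpVariable(f"v_{i}", cat="Binary") for i in opinions}

# Objective (Eq. 1), restricted to a single relation type for brevity.
prob += (lam * pulp.lpSum(f[i, z] * x[i, z] for i in spans for z in LABELS)
         + (1 - lam) * pulp.lpSum(r[i, j] * u[i, j] for (i, j) in u)
         + (1 - lam) * pulp.lpSum(r_implicit[i] * v[i] for i in opinions))

# Constraint 1: each span receives exactly one label.
for i in spans:
    prob += pulp.lpSum(x[i, z] for z in LABELS) == 1

# Reduced Constraint 3: an opinion links to one argument unless it is implicit.
for i in opinions:
    prob += pulp.lpSum(u[i, j] for j in arguments) == 1 - v[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
selected = [var.name for var in list(x.values()) + list(u.values()) + list(v.values())
            if var.value() == 1]
print(selected)   # e.g. ['x_s1_D', 'x_s2_T', 'u_s1_s2']
```

On this toy instance the solver prefers labeling s1 as an opinion expression linked to s2 as its target, since the explicit-relation potential outweighs the implicit one; the full model simply scales this formulation up to all candidate spans, both relation types, and all five constraints.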
We trained the classifiers for relation extraction using L1-regularized logistic regression with default parameters using the LIBLINEAR (Fan et al., 2008) package. For joint inference, we used GLPK9 to provide the optimal ILP solution. The parameter λ was tuned using the development set. 4.1 Baseline Methods We compare our approach to several pipeline baselines. Each extracts opinion entities first using the same CRF employed in our approach, and then predicts opinion relations on the opinion entity candidates obtained from the CRF prediction. Three relation extraction techniques were used in the baselines: • Adj: Inspired by the adjacency rule used in Hu and Liu (2004), it links each argument candidate to its nearest opinion candidate. Arguments that do not link to any opinion candidate are discarded. This is also used as a strong baseline in Choi et al. (2006). • Syn: Links pairs of opinion and argument candidates that present prominent syntactic patterns. (We consider the syntactic patterns listed in Section 3.2.1.) Previous work also demonstrates the effectiveness of syntactic information in opinion extraction (Johansson and Moschitti, 2012). 9http://www.gnu.org/software/glpk/ 1645 Opinion Expression Opinion Target Opinion Holder Method P R F1 P R F1 P R F1 CRF 82.21 66.15 73.31 73.22 48.58 58.41 72.32 49.09 58.48 CRF+Adj 82.21 66.15 73.31 80.87 42.31 55.56 75.24 48.48 58.97 CRF+Syn 82.21 66.15 73.31 81.87 30.36 44.29 78.97 40.20 53.28 CRF+RE 83.02 48.99 61.62 85.07 22.01 34.97 78.13 40.40 53.26 Joint-Model 71.16 77.85 74.35∗ 75.18 57.12 64.92∗∗ 67.01 66.46 66.73∗∗ CRF 66.60 52.57 58.76 44.44 29.60 35.54 65.18 44.24 52.71 CRF+Adj 66.60 52.57 58.76 49.10 25.81 33.83 68.03 43.84 53.32 CRF+Syn 66.60 52.57 58.76 50.26 18.41 26.94 74.60 37.98 50.33 CRF+RE 69.27 40.09 50.79 60.45 15.37 24.51 75 38.79 51.13 Joint-Model 57.39 62.40 59.79∗ 49.15 38.33 43.07∗∗ 62.73 62.22 62.47∗∗ Table 2: Performance on opinion entity extraction using overlap and exact matching metrics (the top table uses overlap and the bottom table uses exact). Two-tailed t-test results are shown on F1 measure for our method compared to the other baselines (statistical significance is indicated with ∗(p < 0.05), ∗∗(p < 0.005)). IS-ABOUT IS-FROM Method P R F1 P R F1 CRF+Adj 73.65 37.34 49.55 70.22 41.58 52.23 CRF+Syn 76.21 28.28 41.25 77.48 36.63 49.74 CRF+RE 78.26 20.33 32.28 74.81 37.55 50.00 CRF+Adj-merged-10-best 25.05 61.18 35.55 30.28 62.82 40.87 CRF+Syn-merged-10-best 41.60 45.66 43.53 48.08 54.03 50.88 CRF+RE-merged-10-best 51.60 33.09 40.32 47.73 54.40 50.84 Joint-Model 64.38 51.20 57.04∗∗ 64.97 58.61 61.63∗∗ Table 3: Performance on opinion relation extraction using the overlap metric. • RE: Predicts opinion relations by employing the opinion-arg classifier and opinionimplicit-arg classifier. First, the opinion-arg classifier identifies pairs of opinion and argument candidates that form valid opinion relations, and then the opinion-implicit-arg classifier is used on the remaining opinion candidates to further identify opinion expressions without explicit arguments. We report results using opinion entity candidates from the best CRF output and from the merged 10-best CRF output.10 The motivation of merging the 10-best output is to increase recall for the pipeline methods. 5 Results Table 2 shows the results of opinion entity identification using both overlap and exact metrics. We compare our approach with the pipeline baselines and CRF (the first step of the pipeline). 
We can see that our joint inference approach significantly outperforms all the baselines in F1 measure on extracting all types of opinion entities. 10It is similar to the merged 10-best baseline in Choi et al. (2006). If an entity Ei extracted by the ith-best sequence overlaps with an entity Ej extracted by the jth-best sequence, where i ≤ j, then we discard Ej. If Ei and Ej do not overlap, then we consider both entities. In general, by adding the relation extraction step, the pipeline baselines are able to improve precision over the CRF but fail at recall. CRF+Syn and CRF+Adj provide the same performance as CRF, since the relation extraction step only affects the results of opinion arguments. By incorporating syntactic information, CRF+Syn provides better precision than CRF+Adj on extracting arguments at the expense of recall. This indicates that using simple syntactic rules would mistakenly filter many correct relations. By using binary classifiers to predict relations, CRF+RE produces high precision on opinion and target extraction but also results in very low recall. Using the exact metric, we observe the same general trend in the results as the overlap metric. The scores are lower since the metric is much stricter. Table 3 shows the results of opinion relation extraction using the overlap metric. We compare our approach with pipelined baselines in two settings: one employs relation extraction on the 1-best output of CRF (top half of the table) and the other employs the merged 10-best output of CRF (bottom half of the table). We can see that in general, using merged 10-best CRF outputs boosts the recall while sacrificing precision. This is expected since merging the 10-best CRF outputs favors candidates that are believed to be more accurate by the CRF predictor. If CRF makes mistakes, the mistakes will propagate to the relation extraction step. The poor performance on precision further confirms the error propagation problem in the pipeline approaches. In contrast, our joint-inference method successfully boosts the recall while maintaining reasonable precision. This demonstrates that joint inference can effectively leverage the advantage of individual predictors and limit error propagation.

IS-ABOUT Relation Extraction | IS-FROM Relation Extraction
Method P R F1 | P R F1
ILP-W/O-ENTITY 49.10 40.48 44.38 | 44.77 58.24 50.63
ILP-W-SINGLE-RE 63.88 49.35 55.68 | 53.64 65.02 58.78
ILP-W/O-IMPLICIT-RE 62.00 44.73 51.97 | 73.23 51.28 60.32
Joint-Model 64.38 51.20 57.04∗∗ | 64.97 58.61 61.63∗
Table 4: Comparison between our approach and ILP baselines that omit some potentials in our approach.

To demonstrate the effectiveness of different potentials in our joint inference model, we consider three variants of our ILP formulation that omit some potentials in the joint inference: one is ILP-W/O-ENTITY, which extracts opinion relations without integrating information from opinion entity identification; one is ILP-W-SINGLE-RE, which focuses on extracting a single opinion relation and ignores the information from the other relation; the third one is ILP-W/O-IMPLICIT-RE, which omits the potential for the opinion-implicit-arg relation and assumes every opinion expression is linked to an explicit argument. The objective function of ILP-W/O-ENTITY can be represented as

\arg\max_{u} \sum_{k} \sum_{i \in O} \sum_{j \in A_k} r_{ij} u_{ij} \quad (2)

which is subject to constraints on uij to enforce that relations do not overlap and to limit the maximum number of relations that can be extracted for each opinion expression and each argument.
For ILP-W-SINGLE-RE, we simply remove the variables associated with one opinion relation in the objective function (1) and constraints. The formulation of ILP-W/O-IMPLICIT-RE removes the variables associated with the potential ri∅ in the objective function and the corresponding constraints. It can be viewed as an extension to the ILP approach in Choi et al. (2006) that includes opinion targets and uses a simpler ILP formulation with only one parameter and fewer binary variables and constraints to represent entity label assignments.11 11We compared the proposed ILP formulation with the ILP formulation in Choi et al. (2006) on extracting opinion holders, opinion expressions and IS-FROM relations, and showed that the proposed ILP formulation performs better on all three extraction tasks. Table 4 shows the results of these methods on opinion relation extraction. We can see that without the knowledge of the entity extractor, ILP-W/O-ENTITY provides poor performance on both relation extraction tasks. This confirms the effectiveness of leveraging knowledge from the entity extractor and the relation extractor. The improvement yielded by our approach over ILP-W-SINGLE-RE demonstrates the benefit of jointly optimizing different types of opinion relations. Our approach also outperforms ILP-W/O-IMPLICIT-RE, which does not take into account implicit relations. The results demonstrate that incorporating knowledge of implicit opinion relations is important. 6 Discussion We note that the joint inference model yields a clear improvement on recall but not on precision compared to the CRF-based baselines. Analyzing the errors, we found that the joint model extracts a comparable number of opinion entities compared to the gold standard, while the CRF-based baselines extract significantly fewer opinion entities (around 60% of the number of entities in the gold standard). With more extracted opinion entities, the precision is sacrificed but recall is boosted substantially, and overall we see an increase in F-measure. We also found that a good portion of errors were made because the generated candidates failed to cover the correct solutions. Recall that the joint model finds the global optimal solution over a set of opinion entity and relation candidates, which are obtained from the n-best CRF predictions and constituents in the parse tree that satisfy certain syntactic patterns. It is possible that the generated candidates do not contain the gold standard answers. For example, our model failed to identify the IS-ABOUT relation (offers, general aid) from the following sentence: Powell had contacted ... and received offersO1 of [general aid]T1... because both the CRF predictor and the syntactic heuristics fail to capture (offers, general aid) as a potential relation candidate. Applying simple heuristics such as treating all verbs or verb phrases as opinion candidates would not help because it would introduce a large number of negative candidates and lower the accuracy of relation extraction (only 52% of the opinion expressions are verbs or verb phrases and 64% of the opinion targets are nouns or noun phrases in the corpus we used). Therefore a more effective candidate generation method is needed to allow more candidates while limiting the number of negative candidates. We also observed incorrect parsing to be a cause of error. We hope to study ways to account for such errors in our approach as future work. In terms of computational time, our ILP formulation can be solved very efficiently using advanced ILP solvers.
In our experiment, using GLPK’s branchand-cut solver took 0.2 seconds to produce optimal ILP solutions for 1000 sentences on a machine with Intel Core 2 Duo CPU and 4GB RAM. 7 Conclusion In this paper we propose a joint inference approach for extracting opinion-related entities and opinion relations. We decompose the task into different subproblems, and jointly optimize them using constraints that aim to encourage their consistency and reduce prediction uncertainty. We show that our approach can effectively integrate knowledge from different predictors and achieve significant improvements in overall performance for opinion extraction. For future work, we plan to extend our model to handle more complex opinion relations, e.g. nesting or cross-sentential relations. This can be potentially addressed by incorporating more powerful predictors and more complex linguistic constraints. Acknowledgments This work was supported in part by DARPA-BAA12-47 DEFT grant 12475008 and NSF grant BCS0904822. We thank Igor Labutov for helpful discussion and suggestions, Ainur Yessenalina for early discussion of the work, as well as the reviews for helpful comments. References E. Breck, Y. Choi, and C. Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of the 20th international joint conference on Artifical intelligence, pages 2683–2688. Morgan Kaufmann Publishers Inc. Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. Identifying sources of opinions with conditional random fields and extraction patterns. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 355–362. Association for Computational Linguistics. Y. Choi, E. Breck, and C. Cardie. 2006. Joint extraction of entities and relations for opinion recognition. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 431–439. Association for Computational Linguistics. D. Das, A.F.T. Martins, and N.A. Smith. 2012. An exact dual decomposition algorithm for shallow semantic parsing with constraints. Proceedings of* SEM.[ii, 10, 50]. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874. M. Hu and B. Liu. 2004. Mining opinion features in customer reviews. In Proceedings of the National Conference on Artificial Intelligence, pages 755–760. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999. Richard Johansson and Alessandro Moschitti. 2010a. Reranking models in fine-grained opinion analysis. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 519–527. Association for Computational Linguistics. Richard Johansson and Alessandro Moschitti. 2010b. Syntactic and semantic structure for opinion expression detection. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 67–76. Association for Computational Linguistics. Richard Johansson and Alessandro Moschitti. 2012. Relational features in fine-grained opinion analysis. Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In Proceedings of the Workshop on Sentiment and Subjectivity in Text, pages 1– 8. Association for Computational Linguistics. N. Kobayashi, K. Inui, and Y. Matsumoto. 2007. Extracting aspect-evaluation and aspect-of relations in opinion mining. 
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural 1648 Language Learning (EMNLP-CoNLL), pages 1065– 1074. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. K. Liu, L. Xu, and J. Zhao. 2012. Opinion target extraction using word-based translation model. In Proceedings of the conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1–167. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Now Pub. V. Punyakanok, D. Roth, and W. Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In Proceedings of the 21st international jont conference on Artifical intelligence, pages 1199–1204. Morgan Kaufmann Publishers Inc. G. Qiu, B. Liu, J. Bu, and C. Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational linguistics, 37(1):9– 27. R. Quirk, S. Greenbaum, G. Leech, J. Svartvik, and D. Crystal. 1985. A comprehensive grammar of the English language, volume 397. Cambridge Univ Press. D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. Defense Technical Information Center. J. Ruppenhofer, S. Somasundaran, and J. Wiebe. 2008. Finding the sources and targets of subjective expressions. In Proceedings of LREC. Vivek Srikumar and Dan Roth. 2011. A joint model for extended semantic role labeling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 129–139. Association for Computational Linguistics. V. Stoyanov and C. Cardie. 2008. Topic identification for fine-grained opinion analysis. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 817–824. Association for Computational Linguistics. J. Wiebe, T. Wilson, and C. Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2):165– 210. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2009. Recognizing contextual polarity: An exploration of features for phrase-level sentiment analysis. Computational linguistics, 35(3):399–433. Theresa Wilson. 2008. Fine-Grained Subjectivity Analysis. Ph.D. thesis, Ph. D. thesis, University of Pittsburgh. Intelligent Systems Program. Y. Wu, Q. Zhang, X. Huang, and L. Wu. 2009. Phrase dependency parsing for opinion mining. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3-Volume 3, pages 1533–1541. Association for Computational Linguistics. B. Yang and C. Cardie. 2012. Extracting opinion expressions with semi-markov conditional random fields. In Proceedings of the conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O’Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1462–1470. Association for Computational Linguistics. 1649
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1650–1659, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Linguistic Models for Analyzing and Detecting Biased Language Marta Recasens Stanford University [email protected] Cristian Danescu-Niculescu-Mizil Stanford University Max Planck Institute SWS [email protected] Dan Jurafsky Stanford University [email protected] Abstract Unbiased language is a requirement for reference sources like encyclopedias and scientific texts. Bias is, nonetheless, ubiquitous, making it crucial to understand its nature and linguistic realization and hence detect bias automatically. To this end we analyze real instances of human edits designed to remove bias from Wikipedia articles. The analysis uncovers two classes of bias: framing bias, such as praising or perspective-specific words, which we link to the literature on subjectivity; and epistemological bias, related to whether propositions that are presupposed or entailed in the text are uncontroversially accepted as true. We identify common linguistic cues for these classes, including factive verbs, implicatives, hedges, and subjective intensifiers. These insights help us develop features for a model to solve a new prediction task of practical importance: given a biased sentence, identify the bias-inducing word. Our linguistically-informed model performs almost as well as humans tested on the same task. 1 Introduction Writers and editors of reference works such as encyclopedias, textbooks, and scientific articles strive to keep their language unbiased. For example, Wikipedia advocates a policy called neutral point of view (NPOV), according to which articles should represent “fairly, proportionately, and as far as possible without bias, all significant views that have been published by reliable sources” (Wikipedia, 2013b). Wikipedia’s style guide asks editors to use nonjudgmental language, to indicate the relative prominence of opposing points of view, to avoid presenting uncontroversial facts as mere opinion, and, conversely, to avoid stating opinions or contested assertions as facts. Understanding the linguistic realization of bias is important for linguistic theory; automatically detecting these biases is equally significant for computational linguistics. We propose to address both by using a powerful resource: edits in Wikipedia that are specifically designed to remove bias. Since Wikipedia maintains a complete revision history, the edits associated with NPOV tags allow us to compare the text in its biased (before) and unbiased (after) form, helping us better understand the linguistic realization of bias. Our work thus shares the intuition of prior NLP work applying Wikipedia’s revision history (Nelken and Yamangil, 2008; Yatskar et al., 2010; Max and Wisniewski, 2010; Zanzotto and Pennacchiotti, 2010). The analysis of Wikipedia’s edits provides valuable linguistic insights into the nature of biased language. We find two major classes of bias-driven edits. The first, framing bias, is realized by subjective words or phrases linked with a particular point of view. In (1), the term McMansion, unlike homes, appeals to a negative attitude toward large and pretentious houses. The second class, epistemological bias, is related to linguistic features that subtly (often via presupposition) focus on the believability of a proposition. 
In (2), the assertive stated removes the bias introduced by claimed, which casts doubt on Kuypers’ statement. (1) a. Usually, smaller cottage-style houses have been demolished to make way for these McMansions. b. Usually, smaller cottage-style houses have been demolished to make way for these homes. (2) a. Kuypers claimed that the mainstream press in America tends to favor liberal viewpoints. b. Kuypers stated that the mainstream press in America tends to favor liberal viewpoints. Bias is linked to the lexical and grammatical cues identified by the literature on subjectivity (Wiebe et al., 2004; Lin et al., 2011), sentiment (Liu et al., 2005; Turney, 2002), and especially stance 1650 or “arguing subjectivity” (Lin et al., 2006; Somasundaran and Wiebe, 2010; Yano et al., 2010; Park et al., 2011; Conrad et al., 2012). For example, like stance, framing bias is realized when the writer of a text takes a particular position on a controversial topic and uses its metaphors and vocabulary. But unlike the product reviews or debate articles that overtly use subjective language, editors in Wikipedia are actively trying to avoid bias, and hence biases may appear more subtly, in the form of covert framing language, or presuppositions and entailments that may not play as important a role in other genres. Our linguistic analysis identifies common classes of these subtle bias cues, including factive verbs, implicatives and other entailments, hedges, and subjective intensifiers. Using these cues could help automatically detect and correct instances of bias, by first finding biased phrases, then identifying the word that introduces the bias, and finally rewording to eliminate the bias. In this paper we propose a solution for the second of these tasks, identifying the bias-inducing word in a biased phrase. Since, as we show below, this task is quite challenging for humans, our system has the potential to be very useful in improving the neutrality of reference works like Wikipedia. Tested on a subset of non-neutral sentences from Wikipedia, our model achieves 34% accuracy—and up to 59% if the top three guesses are considered—on this difficult task, outperforming four baselines and nearing humans tested on the same data. 2 Analyzing a Dataset of Biased Language We begin with an empirical analysis based on Wikipedia’s bias-driven edits. This section describes the data, and summarizes our linguistic analysis.1 2.1 The NPOV Corpus from Wikipedia Given Wikipedia’s strict enforcement of an NPOV policy, we decided to build the NPOV corpus, containing Wikipedia edits that are specifically designed to remove bias. Editors are encouraged to identify and rewrite biased passages to achieve a more neutral tone, and they can use several NPOV 1The data and bias lexicon we developed are available at http://www.mpi-sws.org/˜cristian/Biased_ language.html Data Articles Revisions Words Edits Sents Train 5997 2238K 11G 13807 1843 Dev 653 210K 0.9G 1261 163 Test 814 260K 1G 1751 230 Total 7464 2708K 13G 16819 2235 Table 1: Statistics of the NPOV corpus, extracted from Wikipedia. (Edits refers to bias-driven edits, i.e., with an NPOV comment. Sents refers to sentences with a one-word bias-driven edit.) tags to mark biased content.2 Articles tagged this way fall into Wikipedia’s category of NPOV disputes. We constructed the NPOV corpus by retrieving all articles that were or had been in the NPOVdispute category3 together with their full revision history. 
We used Stanford’s CoreNLP tools4 to tokenize and split the text into sentences. Table 1 shows the statistics of this corpus, which we split into training (train), development (dev), and test. Following Wikipedia’s terminology, we call each version of a Wikipedia article a revision, and so an article can be viewed as a set of (chronologically ordered) revisions. 2.2 Extracting Edits Meant to Remove Bias Given all the revisions of a page, we extracted the changes between pairs of revisions with the wordmode diff function from the Diff Match and Patch library.5 We refer to these changes between revisions as edits, e.g., McMansion > large home. An edit consists of two strings: the old string that is being replaced (i.e., the before form), and the new modified string (i.e., the after form). Our assumption was that among the edits happening in NPOV disputes, we would have a high density of edits intended to remove bias, which we call bias-driven edits, like (1) and (2) from Section 1. But many other edits occur even in NPOV disputes, including edits to fix spelling or grammatical errors, simplify the language, make the meaning more precise, or even vandalism (Max 2{{POV}}, {{POV-check}}, {{POV-section}}, etc. Adding these tags displays a template such as “The neutrality of this article is disputed. Relevant discussion may be found on the talk page. Please do not remove this message until the dispute is resolved.” 3http://en.wikipedia.org/wiki/ Category:All_NPOV_disputes 4http://nlp.stanford.edu/software/ corenlp.shtml 5http://code.google.com/p/google-diffmatch-patch 1651 and Wisniewski, 2010). Therefore, in order to extract a high-precision set of bias-driven edits, we took advantage of the comments that editors can associate with a revision—typically short and brief sentences describing the reason behind the revision. We considered as bias-driven edits those that appeared in a revision whose comment mentioned (N)POV, e.g., Attempts at presenting some claims in more NPOV way; or merging in a passage from the researchers article after basic NPOVing. We only kept edits whose before and after forms contained five or fewer words, and discarded those that only added a hyperlink or that involved a minimal change (character-based Levenshtein distance < 4). The final number of biasdriven edits for each of the data sets is shown in the “Edits” column of Table 1. 2.3 Linguistic Analysis Style guides talk about biased language in a prescriptive manner, listing a few words that should be avoided because they are flattering, vague, or endorse a particular point of view (Wikipedia, 2013a). Our focus is on analyzing actual biased text and bias-driven edits extracted from Wikipedia. As we suggested above, this analysis uncovered two major classes of bias: epistemological bias and framing bias. Table 2 shows the distribution (from a sample of 100 edits) of the different types and subtypes of bias presented in this section. (A) Epistemological bias involves propositions that are either commonly agreed to be true or commonly agreed to be false and that are subtly presupposed, entailed, asserted or hedged in the text. 1. Factive verbs (Kiparsky and Kiparsky, 1970) presuppose the truth of their complement clause. In (3-a) and (4-a), realize and reveal presuppose the truth of “the oppression of black people...” and “the Meditation technique produces...”, whereas (3-b) and (4-b) present the two propositions as somebody’s stand or an experimental result. (3) a. 
He realized that the oppression of black people was more of a result of economic exploitation than anything innately racist. b. His stand was that the oppression of black people was more of a result of economic exploitation than anything innately racist. (4) a. The first research revealed that the Meditation technique produces a unique state fact. b. The first research indicated that the Meditation technique produces a unique state fact. Bias Subtype % A. Epistemological bias 43 - Factive verbs 3 - Entailments 25 - Assertives 11 - Hedges 4 B. Framing bias 57 - Intensifiers 19 - One-sided terms 38 Table 2: Proportion of the different bias types. 2. Entailments are directional relations that hold whenever the truth of one word or phrase follows from another, e.g., murder entails kill because there cannot be murdering without killing (5). However, murder entails killing in an unlawful, premeditated way. This class includes implicative verbs (Karttunen, 1971), which imply the truth or untruth of their complement, depending on the polarity of the main predicate. In (6-a), coerced into accepting entails accepting in an unwilling way. (5) a. After he murdered three policemen, the colony proclaimed Kelly a wanted outlaw. b. After he killed three policemen, the colony proclaimed Kelly a wanted outlaw. (6) a. A computer engineer who was coerced into accepting a plea bargain. b. A computer engineer who accepted a plea bargain. 3. Assertive verbs (Hooper, 1975) are those whose complement clauses assert a proposition. The truth of the proposition is not presupposed, but its level of certainty depends on the asserting verb. Whereas verbs of saying like say and state are usually neutral, point out and claim cast doubt on the certainty of the proposition. (7) a. The “no Boeing” theory is a controversial issue, even among conspiracists, many of whom have pointed out that it is disproved by ... b. The “no Boeing” theory is a controversial issue, even among conspiracists, many of whom have said that it is disproved by... (8) a. Cooper says that slavery was worse in South America and the US than Canada, but clearly states that it was a horrible and cruel practice. b. Cooper says that slavery was worse in South America and the US than Canada, but points out that it was a horrible and cruel practice. 1652 4. Hedges are used to reduce one’s commitment to the truth of a proposition, thus avoiding any bold predictions (9-b) or statements (10-a).6 (9) a. Eliminating the profit motive will decrease the rate of medical innovation. b. Eliminating the profit motive may have a lower rate of medical innovation. (10) a. The lower cost of living in more rural areas means a possibly higher standard of living. b. The lower cost of living in more rural areas means a higher standard of living. Epistemological bias is bidirectional, that is, bias can occur because doubt is cast on a proposition commonly assumed to be true, or because a presupposition or implication is made about a proposition commonly assumed to be false. For example, in (7) and (8) above, point out is replaced in the former case, but inserted in the second case. If the truth of the proposition is uncontroversially accepted by the community (i.e., reliable sources, etc.), then the use of a factive is unbiased. In contrast, if only a specific viewpoint agrees with its truth, then using a factive is biased. 
(B) Framing bias is usually more explicit than epistemological bias because it occurs when subjective or one-sided words are used, revealing the author’s stance in a particular debate (Entman, 2007). 1. Subjective intensifiers are adjectives or adverbs that add (subjective) force to the meaning of a phrase or proposition. (11) a. Schnabel himself did the fantastic reproductions of Basquiat’s work. b. Schnabel himself did the accurate reproductions of Basquiat’s work. (12) a. Shwekey’s albums are arranged by many talented arrangers. b. Shwekey’s albums are arranged by many different arrangers. 2. One-sided terms reflect only one of the sides of a contentious issue. They often belong to controversial subjects (e.g., religion, terrorism, etc.) where the same event can be seen from two or more opposing perspectives, like the Israeli-Palestinian conflict (Lin et al., 2006). 6See Choi et al. (2012) for an exploration of the interface between hedging and framing. (13) a. Israeli forces liberated the eastern half of Jerusalem. b. Israeli forces captured the eastern half of Jerusalem. (14) a. Concerned Women for America’s major areas of political activity have consisted of opposition to gay causes, pro-life law... b. Concerned Women for America’s major areas of political activity have consisted of opposition to gay causes, anti-abortion law... (15) a. Colombian terrorist groups. b. Colombian paramilitary groups. Framing bias has been studied within the literature on stance recognition and arguing subjectivity. Because this literature has focused on identifying which side an article takes on a two-sided debate such as the Israeli-Palestinian conflict (Lin et al., 2006), most studies cast the problem as a two-way classification of documents or sentences into for/positive vs. against/negative (Anand et al., 2011; Conrad et al., 2012; Somasundaran and Wiebe, 2010), or into one of two opposing views (Yano et al., 2010; Park et al., 2011). The features used by these models include subjectivity and sentiment lexicons, counts of unigrams and bigrams, distributional similarity, discourse relationships, and so on. The datasets used by these studies come from genres that overtly take a specific stance (e.g., debates, editorials, blog posts). In contrast, Wikipedia editors are asked not to advocate a particular point of view, but to provide a balanced account of the different available perspectives. For this reason, overtly biased opinion statements such as “I believe that...” are not common in Wikipedia. The features used by the subjectivity literature help us detect framing bias, but we also need features that capture epistemological bias expressed through presuppositions and entailments. 3 Automatically Identifying Biased Language We now show how the bias cues identified in Section 2.3 can help solve a new task. Given a biased sentence (e.g., a sentence that a Wikipedia editor has tagged as violating the NPOV policy), our goal in this new task is to identify the word that introduces bias. This is part of a potential three-step process for detecting and correcting biased language: (1) finding biased phrases, (2) identifying the word that introduces the bias, (3) rewording to eliminate the bias. As we will see below, it can be 1653 hard even for humans to track down the sources of bias, because biases in reference works are often subtle and implicit. 
An automatic bias detector that can highlight the bias-inducing word(s) and draw the editors’ attention to words that need to be modified could thus be important for improving reference works like Wikipedia or even in news reporting. We selected the subset of sentences that had a single NPOV edit involving one (original) word. (Although the before form consists of only one word, the after form can be either one or more words or the null string (i.e., deletion edits); we do not use the after string in this identification task). The number of sentences in the train, dev and test sets is shown in the last column of Table 1. We trained a logistic regression model on a feature vector for every word that appears in the NPOV sentences from the training set, with the bias-inducing words as the positive class, and all the other words as the negative class. The features are described in the next section. At test time, the model is given a set of sentences and, for each of them, it ranks the words according to their probability to be biased, and outputs the highest ranked word (TOP1 model), the two highest ranked words (TOP2 model), or the three highest ranked words (TOP3 model). 3.1 Features The types of features used in the logistic regression model are listed in Table 3, together with their value space. The total number of features is 36,787. The ones targeting framing bias draw on previous work on sentiment and subjectivity detection (Wiebe et al., 2004; Liu et al., 2005). Features to capture epistemological bias are based on the bias cues identified in Section 2.3. A major split separates the features that describe the word under analysis (e.g., lemma, POS, whether it is a hedge, etc.) from those that describe its surrounding context (e.g., the POS of the word to the left, whether there is a hedge in the context, etc.). We define context as a 5-gram window, i.e., two words to the left of the word under analysis, and two to the right. Taking context into account is important given that biases can be context-dependent, especially epistemological bias since it depends on the truth of a proposition. To define some of the features like POS and grammatical relation, we used the Stanford’s CoreNLP tagger and dependency parser (de Marneffe et al., 2006). Features 9–10 use the list of hedges from Hyland (2005), features 11–14 use the factives and assertives from Hooper (1975), features 15–16 use the implicatives from Karttunen (1971), features 19–20 use the entailments from Berant et al. (2012), features 21–25 employ the subjectivity lexicon from Riloff and Wiebe (2003), and features 26–29 use the sentiment lexicon—positive and negative words—from Liu et al. (2005). If the word (or a word in the context) is in the lexicon, then the feature is true, otherwise it is false. We also included a “bias lexicon” (feature 31) that we built based on our NPOV corpus from Wikipedia. We used the training set to extract the lemmas of words that were the before form of at least two NPOV edits, and that occurred in at least two different articles. Of the 654 words included in this lexicon, 433 were unique to this lexicon (i.e., recorded in neither Riloff and Wiebe’s (2003) subjectivity lexicon nor Liu et al.’s (2005) sentiment lexicon) and represented many one-sided or controversial terms, e.g., abortion, same-sex, execute. 
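For illustration, a minimal sketch of this lexicon filtering is shown below; it keeps the lemmas that appear as the before form of at least two bias-driven edits and in at least two different articles. The edit tuples and the lemmatize helper are hypothetical stand-ins, not the actual extraction pipeline.

```python
# Minimal sketch of the bias-lexicon filtering described above; the edit tuples
# and the lemmatize() helper are hypothetical stand-ins for the real pipeline.

from collections import defaultdict

def lemmatize(word):
    # Crude stand-in for a real lemmatizer.
    return word.lower().rstrip("s")

# (article_id, before_form, after_form) for single-word bias-driven edits
edits = [
    ("art1", "McMansions", "homes"),
    ("art2", "McMansion", "home"),
    ("art3", "claimed", "stated"),
    ("art4", "claimed", "said"),
    ("art5", "fantastic", "accurate"),
]

edit_count = defaultdict(int)
article_ids = defaultdict(set)
for article_id, before, _after in edits:
    lemma = lemmatize(before)
    edit_count[lemma] += 1
    article_ids[lemma].add(article_id)

# Keep lemmas edited at least twice, in at least two different articles.
bias_lexicon = {lemma for lemma in edit_count
                if edit_count[lemma] >= 2 and len(article_ids[lemma]) >= 2}
print(sorted(bias_lexicon))   # ['claimed', 'mcmansion']
```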
Finally, we also included a “collaborative feature” that, based on the previous revisions of the edit’s article, computes the ratio between the number of times that the word was NPOV-edited and its frequency of occurrence. This feature was designed to capture framing bias specific to an article or topic. 3.2 Baselines Previous work on subjectivity and stance recognition has been evaluated on the task of classifying documents as opinionated vs. factual, for vs. against, positive vs. negative. Given that the task of identifying the bias-inducing word of a sentence is novel, there were no previous results to compare directly against. We ran the following five baselines. 1. Random guessing. Naively returns a random word from every sentence. 2. Role baseline. Selects the word with the syntactic role that has the highest probability to be biased, as computed on the training set. This is the parse tree root (probability p = .126 to be biased), followed by verbal arguments (p = .085), and the subject (p = .084). 1654 ID Feature Value Description 1* Word <string> Word w under analysis. 2 Lemma <string> Lemma of w. 3* POS {NNP, JJ, ...} POS of w. 4* POS – 1 {NNP, JJ, ...} POS of one word before w. 5 POS – 2 {NNP, JJ, ...} POS of two words before w. 6* POS + 1 {NNP, JJ, ...} POS of one word after w. 7 POS + 2 {NNP, JJ, ...} POS of two words after w. 8 Position in sentence {start, mid, end} Position of w in the sentence (split into three parts). 9 Hedge {true, false} w is in Hyland’s (2005) list of hedges (e.g., apparently). 10* Hedge in context {true, false} One/two words) around w is a hedge (Hyland, 2005). 11* Factive verb {true, false} w is in Hooper’s (1975) list of factives (e.g., realize). 12* Factive verb in context {true, false} One/two word(s) around w is a factive (Hooper, 1975). 13* Assertive verb {true, false} w is in Hooper’s (1975) list of assertives (e.g., claim). 14* Assertive verb in context {true, false} One/two word(s) around w is an assertive (Hooper, 1975). 15 Implicative verb {true, false} w is in Karttunen’s (1971) list of implicatives (e.g., manage). 16* Implicative verb in context {true, false} One/two word(s) around w is an implicative (Karttunen, 1971). 17* Report verb {true, false} w is a report verb (e.g., add). 18 Report verb in context {true, false} One/two word(s) around w is a report verb. 19* Entailment {true, false} w is in Berant et al.’s (2012) list of entailments (e.g., kill). 20* Entailment in context {true, false} One/two word(s) around w is an entailment (Berant et al., 2012). 21* Strong subjective {true, false} w is in Riloff and Wiebe’s (2003) list of strong subjectives (e.g., absolute). 22 Strong subjective in context {true, false} One/two word(s) around w is a strong subjective (Riloff and Wiebe, 2003). 23* Weak subjective {true, false} w is in Riloff and Wiebe’s (2003) list of weak subjectives (e.g., noisy). 24* Weak subjective in context {true, false} One/two word(s) around w is a weak subjective (Riloff and Wiebe, 2003). 25 Polarity {+, –, both, ...} The polarity of w according to Riloff and Wiebe (2003), e.g., praising is positive. 26* Positive word {true, false} w is in Liu et al.’s (2005) list of positive words (e.g., excel). 27* Positive word in context {true, false} One/two word(s) around w is positive (Liu et al., 2005). 28* Negative word {true, false} w is in Liu et al.’s (2005) list of negative words (e.g., terrible). 29* Negative word in context {true, false} One/two word(s) around w is negative (Liu et al., 2005). 
30* Grammatical relation {root, subj, ...} Whether w is the subject, object, root, etc. of its sentence. 31 Bias lexicon {true, false} w has been observed in NPOV edits (e.g., nationalist). 32* Collaborative feature <numeric> Number of times that w was NPOV-edited in the article’s prior history / frequency of w. Table 3: Features used by the bias detector. The star (*) shows the most contributing features. 3. Sentiment baseline. Logistic regression model that only uses the features based on Liu et al.’s (2005) lexicons of positive and negative words (i.e., features 26–29). 4. Subjectivity baseline. Logistic regression model that only uses the features based on Riloff and Wiebe’s (2003) lexicon of subjective words (i.e., features 21–25). 5. Wikipedia baseline. Selects as biased the words that appear in Wikipedia’s list of words to avoid (Wikipedia, 2013a). These baselines assessed the difficulty of the task, as well as the extent to which traditional sentiment-analysis and subjectivity features would suffice to detect biased language. 3.3 Results and Discussion To measure performance, we used accuracy defined as: #sentences with the correctly predicted biased word #sentences The results are shown in Table 4. As explained earlier, we evaluated all the models by outputting as biased either the highest ranked word or the two or three highest ranked words. These correspond to the TOP1, TOP2 and TOP3 columns, respectively. The TOP3 score increases to 59%. A tool that highlights up to three words to be revised would simplify the editors’ job and decrease significantly the time required to revise. Our model outperforms all five baselines by a large margin, showing the importance of considering a wide range of features. Wikipedia’s list of words to avoid falls very short on recall. Fea1655 System TOP1 TOP2 TOP3 Baseline 1: Random 2.18 7.83 9.13 Baseline 2: Role 15.65 20.43 25.65 Baseline 3: Sentiment 14.78 22.61 27.83 Baseline 4: Subjectivity 16.52 25.22 33.91 Baseline 5: Wikipedia 10.00 10.00 10.00 Our system 34.35 46.52 58.70 Humans (AMT) 37.39 50.00 59.13 Table 4: Accuracy (%) of the bias detector on the test set. tures that contribute the most to the model’s performance (in a feature ablation study on the dev set) are highlighted with a star (*) in Table 3. In addition to showing the importance of linguistic cues for different classes of bias, the ablation study highlights the role of contextual features. The bias lexicon does not seem to help much, suggesting that it is overfit to the training data. An error analysis shows that our system makes acceptable errors in that words wrongly predicted as bias-inducing may well introduce bias in a different context. In (16), the system picked eschew, whereas orthodox would have been the correct choice according to the gold edit. Note that both the sentiment and the subjectivity lexicons list eschew as a negative word. The bias type that poses the greatest challenge to the system are terms that are one-sided or loaded in a particular topic, such as orthodox in this example. (16) a. Some Christians eschew orthodox theology; such as the Unitarians, Socinian, [...] b. Some Christians eschew mainstream trinitarian theology; such as the Unitarians, Socinian, [...] The last row in Table 4 lists the performance of humans on the same task, presented in the next section. 4 Human Perception of Biased Language Is it difficult for humans to find the word in a sentence that induces bias, given the subtle, often implicit biases in Wikipedia. 
We used Amazon Mechanical Turk7 (AMT) to elicit annotations from humans for the same 230 sentences from the test set that we used to evaluate the bias detector in Section 3.3. The goal of this annotation was twofold: to compare the performance of our bias detector against a human baseline, and to assess the difficulty of this task for humans. While AMT labelers are not trained Wikipedia editors, under7http://www.mturk.com standing how difficult these cases are for untrained labelers is an important baseline. 4.1 Task Our HIT (Human Intelligence Task) was called “Find the biased word!”. We kept the task description succinct. Turkers were shown Wikipedia’s definition of a “biased statement” and two example sentences that illustrated the two types of bias, framing and epistemological. In each HIT, annotators saw 10 sentences, one after another, and each one followed by a text box entitled “Word introducing bias.” For each sentence, they were asked to type in the text box the word that caused the statement to be biased. They were only allowed to enter a single word. Before the 10 sentences, turkers were asked to list the languages they spoke as well as their primary language in primary school. This was English in all the cases. In addition, we included a probe question in the form of a paraphrasing task: annotators were given a sentence and two paraphrases (a correct and a bad one) to choose from. The goal of this probe question was to discard annotators who were not paying attention or did not have a sufficient command of English. This simple test was shown to be effective in verifying and eliciting linguistic attentiveness (Munro et al., 2010). This was especially important in our case as we were interested in using the human annotations as an oracle. At the end of the task, participants were given the option to provide additional feedback. We split the 230 sentences into 23 sets of 10 sentences, and asked for 10 annotations of each set. Each approved HIT was rewarded with $0.30. 4.2 Results and Discussion On average, it took turkers about four minutes to complete each HIT. The feedback that we got from some of them confirmed our hypothesis that finding the bias source is difficult: “Some of the ‘biases’ seemed very slight if existent at all,” “This was a lot harder than I thought it would be... Interesting though!”. We postprocessed the answers ignoring case, punctuation signs, and spelling errors. To ensure an answer quality as high as possible, we only kept those turkers who answered attentively by applying two filters: we only accepted answers that matched a valid word from the sentence, and we discarded answers from participants who did not 1656 2 3 4 5 6 7 8 9 10 Number of times the top word was selected Number of sentences 0 10 20 30 40 50 Figure 1: Distribution of the number of turkers who selected the top word (i.e., the word selected by the majority of turkers). pass the paraphrasing task—there were six such cases. These filters provided us with confidence in the turkers’ answers as a fair standard of comparison. Overall, humans correctly identified the biased word 30% of the time. For each sentence, we ranked the words according to the number of turkers (out of 10) who selected them and, like we did for the automated system, we assessed performance when considering only the top word (TOP1), the top 2 words (TOP2), and the top 3 words (TOP3). The last row of Table 4 reports the results. 
Only 37.39% of the majority answers coincided with the gold label, slightly higher than our system’s accuracy. The fact that the human answers are very close to the results of our system reflects the difficulty of the task. Biases in reference works can be very subtle and go unnoticed by humans; automated systems could thus be extremely helpful. As a measure of inter-rater reliability, we computed pairwise agreement. The turkers agreed 40.73% of the time, compared to the 5.1% chance agreement that would be achieved if raters had randomly selected a word for each sentence. Figure 1 plots the number of times the top word of each sentence was selected. The bulk of the sentences only obtained between four and six answers for the same word. There is a good amount of overlap (∼34%) between the correct answers predicted by our system and those from humans. Much like the automated system, humans also have the hardest time identifying words that are one-sided or controversial to a specific topic. They also picked eschew for (16) instead of orthodox. Compared to the system, they do better in detecting bias-inducing intensifiers, and about the same with epistemological bias. 5 Related Work The work in this paper builds upon prior work on subjectivity detection (Wiebe et al., 2004; Lin et al., 2011; Conrad et al., 2012) and stance recognition (Yano et al., 2010; Somasundaran and Wiebe, 2010; Park et al., 2011), but applied to the genre of reference works such as Wikipedia. Unlike the blogs, online debates and opinion pieces which have been the major focus of previous work, bias in reference works is undesirable. As a result, the expression of bias is more implicit, making it harder to detect by both computers and humans. Of the two classes of bias that we uncover, framing bias is indeed strongly linked to subjectivity, but epistemological bias is not. In this respect, our research is comparable to Greene and Resnik’s (2009) work on identifying implicit sentiment or perspective in journalistic texts, based on semantico-syntactic choices. Given that the data that we use is not supposed to be opinionated, our task consists in detecting (implicit) bias instead of classifying into side A or B documents about a controversial topic like ObamaCare (Conrad et al., 2012) or the IsraeliPalestinian conflict (Lin et al., 2006; Greene and Resnik, 2009). Our model detects whether all the relevant perspectives are fairly represented by identifying statements that are one-sided. To this end, the features based on subjectivity and sentiment lexicons turn out to be helpful, and incorporating more features for stance detection is an important direction for future work. Other aspects of Wikipedia structure have been used for other NLP applications. The Wikipedia revision history has been used for spelling correction, text summarization (Nelken and Yamangil, 2008), lexical simplification (Yatskar et al., 2010), paraphrasing (Max and Wisniewski, 2010), and textual entailment (Zanzotto and Pennacchiotti, 2010). Ganter and Strube (2009) have used Wikipedia’s weasel-word tags to train a hedge detector. Callahan and Herring (2011) have examined cultural bias based on Wikipedia’s NPOV policy. 1657 6 Conclusions Our study of bias in Wikipedia has implications for linguistic theory and computational linguistics. We show that bias in reference works falls broadly into two classes, framing and epistemological. 
The cues to framing bias are more explicit and are linked to the literature on subjectivity; cues to epistemological bias are subtle and implicit, linked to presuppositions and entailments in the text. Epistemological bias has not received much attention since it does not play a major role in overtly opinionated texts, the focus of much research on stance recognition. However, our logistic regression model reveals that epistemological and other features can usefully augment the traditional sentiment and subjectivity features for addressing the difficult task of identifying the biasinducing word in a biased sentence. Identifying the bias-inducing word is a challenging task even for humans. Our linguisticallyinformed model performs nearly as well as humans tested on the same task. Given the subtlety of some of these biases, an automated system that highlights one or more potentially biased words would provide a helpful tool for editors of reference works and news reports, not only making them aware of unnoticed biases but also saving them hours of time. Future work could investigate the incorporation of syntactic features or further features from the stance detection literature. Features from the literature on veridicality (de Marneffe et al., 2012) could be informative of the writer’s commitment to the truth of the events described, and document-level features could help assess the extent to which the article provides a balanced account of all the facts and points of view. Finally, the NPOV data and the bias lexicon that we release as part of this research could prove useful in other bias related tasks. Acknowledgments We greatly appreciate the support of Jean Wu and Christopher Potts in running our task on Amazon Mechanical Turk, and all the Amazon Turkers who participated. We benefited from comments by Valentin Spitkovsky on a previous draft and from the helpful suggestions of the anonymous reviewers. The first author was supported by a Beatriu de Pin´os postdoctoral scholarship (2010 BP-A 00149) from Generalitat de Catalunya. The second author was supported by NSF IIS-1016909. The last author was supported by the Center for Advanced Study in the Behavioral Sciences at Stanford. References Pranav Anand, Marilyn Walker, Rob Abbott, Jean E. Fox Tree, Robeson Bowmani, and Michael Minor. 2011. Cats rule and dogs drool!: Classifying stance in online debate. In Proceedings of ACLHLT 2011 Workshop on Computational Approaches to Subjectivity and Sentiment Analysis, pages 1–9. Jonathan Berant, Ido Dagan, Meni Adler, and Jacob Goldberger. 2012. Efficient tree-based approximation for entailment graph learning. In Proceedings of ACL 2012, pages 117–125. Ewa Callahan and Susan C. Herring. 2011. Cultural bias in Wikipedia articles about famous persons. Journal of the American Society for Information Science and Technology, 62(10):1899–1915. Eunsol Choi, Chenhao Tan, Lillian Lee, Cristian Danescu-Niculescu-Mizil, and Jennifer Spindel. 2012. Hedge detection as a lens on framing in the GMO debates: a position paper. In Proceedings of the ACL-2012 Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics, pages 70–79. Alexander Conrad, Janyce Wiebe, and Rebecca Hwa. 2012. Recognizing arguing subjectivity and argument tags. In Proceedings of ACL-2012 Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics, pages 80–88. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. 
In Proceedings of LREC 2006. Marie-Catherine de Marneffe, Christopher D. Manning, and Christopher Potts. 2012. Did it happen? The pragmatic complexity of veridicality assessment. Computational Linguistics, 38(2):301– 333. Robert M. Entman. 2007. Framing bias: Media in the distribution of power. Journal of Communication, 57(1):163–173. Viola Ganter and Michael Strube. 2009. Finding hedges by chasing weasels: Hedge detection using Wikipedia tags and shallow linguistic features. In Proceedings of ACL-IJCNLP 2009, pages 173–176. Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings of NAACL-HLT 2009, pages 503– 511. 1658 Joan B. Hooper. 1975. On assertive predicates. In J. Kimball, editor, Syntax and Semantics, volume 4, pages 91–124. Academic Press, New York. Ken Hyland. 2005. Metadiscourse: Exploring Interaction in Writing. Continuum, London and New York. Lauri Karttunen. 1971. Implicative verbs. Language, 47(2):340–358. Paul Kiparsky and Carol Kiparsky. 1970. Fact. In M. Bierwisch and K. E. Heidolph, editors, Progress in Linguistics, pages 143–173. Mouton, The Hague. Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexander Hauptmann. 2006. Which side are you on? Identifying perspectives at the document and sentence levels. In Proceedings of CoNLL 2006, pages 109–116. Chenghua Lin, Yulan He, and Richard Everson. 2011. Sentence subjectivity detection with weakly-supervised learning. In Proceedings of AFNLP 2011, pages 1153–1161. Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion Observer: analyzing and comparing opinions on the Web. In Proceedings of WWW 2005, pages 342–351. Aur´elien Max and Guillaume Wisniewski. 2010. Mining naturally-occurring corrections and paraphrases from Wikipedia’s revision history. In Proceedings of LREC 2010, pages 3143–3148. Robert Munro, Steven Bethard, Victor Kuperman, Vicky Tzuyin Lai, Robin Melnick, Christopher Potts, Tyler Schnoebelen, and Harry Tily. 2010. Crowdsourcing and language studies: the new generation of linguistic data. In Proceedings of the NAACL-HLT 2010 Workshop on Creating Speech and Language Data With Amazons Mechanical Turk, pages 122–130. Rani Nelken and Elif Yamangil. 2008. Mining Wikipedias article revision history for training Computational Linguistics algorithms. In Proceedings of the 1st AAAI Workshop on Wikipedia and Artificial Intelligence. Souneil Park, KyungSoon Lee, and Junehwa Song. 2011. Contrasting opposing views of news articles on contentious issues. In Proceedings of ACL 2011, pages 340–349. Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In Proceedings of EMNLP 2003, pages 105–112. Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proceedings of the NAACL-HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116–124. Peter D. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of ACL 2002, pages 417–424. Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learning subjective language. Computational Linguistics, 30(3):277–308. Wikipedia. 2013a. Wikipedia: Manual of style / Words to watch. http://en.wikipedia.org/ wiki/Wikipedia:Words_to_avoid. [Retrieved February 5, 2013]. Wikipedia. 2013b. Wikipedia: Neutral point of view. http:http://en.wikipedia.org/wiki/ Wikipedia:Neutral_point_of_view. 
[Retrieved February 5, 2013]. Tae Yano, Philip Resnik, and Noah A. Smith. 2010. Shedding (a thousand points of) light on biased language. In Proceedings of the NAACL-HLT 2010 Workshop on Creating Speech and Language Data With Amazons Mechanical Turk, pages 152–158. Mark Yatskar, Bo Pang, Cristian Danescu-NiculescuMizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. In Proceedings of NAACLHLT 2010, pages 365–368. Fabio M. Zanzotto and Marco Pennacchiotti. 2010. Expanding textual entailment corpora from Wikipedia using co-training. In Proceedings of the 2nd Coling Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources, pages 28–36. 1659
2013
162
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1660–1668, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Evaluating a City Exploration Dialogue System Combining Question-Answering and Pedestrian Navigation Srinivasan Janarthanam1, Oliver Lemon1, Phil Bartie2, Tiphaine Dalmas2, Anna Dickinson2, Xingkun Liu1, William Mackaness2, and Bonnie Webber2 1 The Interaction Lab, Heriot-Watt University 2 Edinburgh University [email protected] Abstract We present a city navigation and tourist information mobile dialogue app with integrated question-answering (QA) and geographic information system (GIS) modules that helps pedestrian users to navigate in and learn about urban environments. In contrast to existing mobile apps which treat these problems independently, our Android app addresses the problem of navigation and touristic questionanswering in an integrated fashion using a shared dialogue context. We evaluated our system in comparison with Samsung S-Voice (which interfaces to Google navigation and Google search) with 17 users and found that users judged our system to be significantly more interesting to interact with and learn from. They also rated our system above Google search (with the Samsung S-Voice interface) for tourist information tasks. 1 Introduction We present a mobile dialogue system (an Android app) called Spacebook that addresses the problem of pedestrian navigation and tourist information in urban environments. There has been little prior work that addresses these two problems - navigation and tourist information provision - in an integrated way. By navigation, we refer to the problem of finding appropriate destinations to go to and the task of wayfinding to reach them and by tourist information provision we refer to the problem of meeting the informational needs of a user about entities such as museums, statues and famous personalities. A dialogue system such as this could serve as a personal tour guide to pedestrian tourists as they walk around unknown cities. With the proliferation of smartphones, there has been a number of mobile apps developed to address these problems. However these apps have the following problems: first, they demand the user’s visual attention because they predominantly present information on a mobile screen. This can be dangerous in urban environments, as well as being distracting. Second, these apps address the problems of navigation and tourist information independently and therefore do not have a shared interaction context. This means that users cannot switch between information and navigation tasks in a natural and fluid manner. User1: Take me to the National Museum. System2: The National Museum is about 300m away.. System3: At the KFC, turn left on to South Bridge System4 : Near you is the statue of David Hume. User2: Who is David Hume. System5: David Hume was a Scottish philosopher.... User3: Tell me more about David Hume. System6: He was one of the most important figures in.. System7: You should be able to see the museum ... User4: Tell me more about the museum. System8: The National Museum of Scotland is a.... Table 1: An example interaction with the evaluated system In contrast to many existing mobile apps, Spacebook has a speech-only interface and addresses both problems in an integrated way. 
We conjecture that with a speech-only interface, users can immerse themselves in exploring the city, and that because of the shared context they can switch between navigation and tourist information tasks more easily. Using the navigational context, Spacebook pushes point-of-interest information which can then initiate tourist information tasks using the QA module. Table 1 presents an example interaction with our system showing the integrated use of navigation and question-answering capabil1660 ities. Utterances System4-8 show the system’s capability to push information about nearby pointsof-interest (PoI) during a navigation task and answer followup questions using the QA system (in utterances User2 and User3). The final 3 utterances show a natural switch between navigation to an entity and QA about that entity. We investigate whether our system using a combination of geographical information system (GIS) and natural language processing (NLP) technologies would be a better companion to pedestrian city explorers than the current state-of-the-art mobile apps. We hypothesize that, (1) users will find our speech-only interface to navigation efficient as it allows them to navigate without having to repeatedly look at a map and (2), that users will find a dialogue interface which integrates touristic question-answering and navigation within a shared context to be useful for finding information about entities in the urban environment. We first present some related work in section 2. We describe the architecture of the system in section 3. We then present our experimental design, results and analysis in sections 5, 6 and 7. 2 Related work Mobile apps such as Siri, Google Maps Navigation, Sygic, etc. address the problem of navigation while apps like Triposo, Guidepal, Wikihood, etc. address the problem of tourist information by presenting the user with descriptive information about various points of interest (PoI) in the city. While some exploratory apps present snippets of information about a precompiled list of PoIs, other apps dynamically generate a list of PoIs arranged based on their proximity to the users. Users can also obtain specific information about PoIs using Search apps. Also, since these navigation and exploratory/search apps do not address both problems in an integrated way, users need to switch between them and therefore lose interaction context. While most apps address these two problems independently, some like Google Now, Google Field Trip, etc, mix navigation with exploration. But such apps present information primarily visually on the screen for the user to read. Some of these are available for download at the Google Play Android app store1. Several dialogue and natural language systems have addressed the issue 1https://play.google.com/store of pedestrian navigation (Malaka and Zipf, 2000; Raubal and Winter, 2002; Dale et al., 2003; Bartie and Mackaness, 2006; Shroder et al., 2011; Dethlefs and Cuay´ahuitl, 2011). There has also been recent interest in shared tasks for generating navigation instructions in indoor and urban environments (Byron et al., 2007; Janarthanam and Lemon, 2011). Some dialogue systems deal with presenting information concerning points of interest (Ko et al., 2005; Kashioka et al., 2011) and interactive question answering (Webb and Webber, 2009). In contrast, Spacebook has the objective of keeping the user’s cognitive load low and preventing users from being distracted (perhaps dangerously so) from walking in the city (Kray et al., 2003). 
Also, it allows users to interleave the two sub-tasks seamlessly and can keep entities discussed in both tasks in shared context (as shown in Table 1). 3 Architecture The architecture of the Spacebook system is shown in figure 1. Our architecture brings together Spoken Dialogue Systems (SDS), Geographic Information Systems (GIS) and QuestionAnswering (QA) technologies (Janarthanam et al., 2012). Its essentially a spoken dialogue system (SDS) consisting of an automatic speech recogniser (ASR), a semantic parser, an Interaction Manager, an utterance generator and a text-tospeech synthesizer (TTS). The GIS modules in this architecture are the City Model, the Visibility Engine, and the Pedestrian tracker. Users communicate with the system using a smartphone-based client app (an Android app) that sends users’ position, pace rate, and spoken utterances to the system, and delivers synthesised system utterances to the user. Figure 1: System Architecture 1661 3.1 Dialogue interface The dialogue interface consists of a speech recognition module, an utterance parser, an interaction manager, an utterance generator and a speech synthesizer. The Nuance 9 speech recogniser with a domain specific language model was used for speech recognition. The recognised speech is currently parsed using a rule-based parser into dialogue acts and semantic content. The Interaction Manager (IM) is the central component of this architecture, which provides the user with navigational instructions, pushes PoI information and manages QA questions. It receives the user’s input in the form of a dialogue act (DA), the user’s location (latitude and longitude) and pace rate. Based on these inputs and the dialogue context, it responds with system output dialogue act, based on a dialogue policy. The IM initiates the conversation with a calibration phase where the user’s initial location and orientation are obtained. The user can then initiate tasks that interest him/her. These tasks include searching for an entity (e.g. a museum or a restaurant), requesting navigation instructions to a destination, asking questions about the entities in the City Model, and so on. When the user is mobile, the IM identifies points of interest2 on the route proximal to the user. We call this “PoI push”. The user is encouraged to ask for more information if he/she is interested. The system also answers adhoc questions from the user (e.g. “Who is David Hume?”, “What is the Old College?”, etc) (see section 3.4). Navigation instructions are given in-situ by observing user’s position continuously, in relation to the next node (street junction) on the current planned route, and they are given priority if in conflict with a PoI push at the same time. Navigation instructions use landmarks near route nodes whenever possible (e.g. “When you reach Clydesdale Bank , keep walking forward”). The IM also informs when users pass by recognisable landmarks, just to reassure them that they are on track (e.g. “You will pass by Tesco on the right”). In addition to navigation instructions, the IM also answers users’ questions concerning the route, his/her location, and location of and distance to the various entities. Finally, the IM uses the city model’s Visibility Engine (VE) to determine whether the destination is visible to the user (see section 3.3). 2Using high scoring ones when there are many, based on tourist popularity ratings in the City Model. 
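As a rough illustration of the prioritisation just described, the sketch below shows how an interaction manager might decide, on each location update, between issuing the pending navigation instruction and pushing information about a nearby high-rated point of interest. This is a simplified reconstruction, not Spacebook's actual Interaction Manager; the class names, distance thresholds, and rating cut-off are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class PoI:
    name: str
    rating: float        # tourist popularity score from the city model
    distance_m: float    # distance from the user's current position

@dataclass
class DialogContext:
    pushed: set = field(default_factory=set)   # PoIs already mentioned in this dialog

def next_system_utterance(dist_to_next_node_m, next_instruction, nearby_pois, ctx,
                          node_trigger_m=20.0, poi_radius_m=50.0, min_rating=4.0):
    # 1. Navigation has priority: speak the instruction when the next route node is close.
    if dist_to_next_node_m <= node_trigger_m:
        return next_instruction
    # 2. Otherwise push the best not-yet-mentioned, high-rated PoI within range.
    candidates = [p for p in nearby_pois
                  if p.distance_m <= poi_radius_m
                  and p.rating >= min_rating
                  and p.name not in ctx.pushed]
    if candidates:
        best = max(candidates, key=lambda p: p.rating)
        ctx.pushed.add(best.name)
        return f"Near you is {best.name}. Would you like to hear more?"
    # 3. Nothing to say on this update.
    return None

ctx = DialogContext()
print(next_system_utterance(120.0, "At the KFC, turn left onto South Bridge",
                            [PoI("the statue of David Hume", 4.6, 30.0)], ctx))
```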
The shared spatial and dialogue context employs a feature-based representation which is updated every 1 second (for location), and after every dialogue turn. Spatial context such as the user’s coordinates, street names, PoIs and landmarks proximal to the user, etc are used by PoI pushing and navigation. The dialogue context maintains the history of landmarks and PoIs pushed, latest entities mentioned, etc to resolve anaphoric references in navigation and QA requests, and to deliver coherent dialogue responses. The IM resolves anaphoric references by keeping a record of entities mentioned in the dialogue context. It also engages in clarification sub-dialogues when the speech recognition confidence scores are low. The IM stores the name and type information for each entity (such as landmark, building, etc) mentioned in navigation instructions and PoI pushes. Subsequent references to these entities using expressions such as “the museum”, “the cafe” etc are resolved by searching for the latest entity of the given type. Pronouns are resolved to the last mentioned entity. The IM also switches between navigation, PoI push, and QA tasks in an intelligent manner by using the shared context to prioritise its utterances from these different tasks. The utterance generator is a Natural Language Generation module that translates the system DA into surface text which is converted into speech using the Cereproc Text-toSpeech Synthesizer using a Scottish female voice. The only changes made were minor adjustments to the pronunciation of certain place names. 3.2 Pedestrian tracker Urban environments can be challenging with limited sky views, and hence limited line of sight to satellites, in deep urban corridors. There is therefore significant uncertainty about the user’s true location reported by GNSS sensors on smartphones (Zandbergen and Barbeau, 2011). This module improves on the reported user position by combining smartphone sensor data (e.g. accelerometer) with map matching techniques, to determine the most likely location of the pedestrian (Bartie and Mackaness, 2012). 3.3 City Model The City Model is a spatial database containing information about thousands of entities in the city of Edinburgh (Bartie and Mackaness, 2013). This data has been collected from a variety of exist1662 ing resources such as Ordnance Survey, OpenStreetMap, Google Places, and the Gazetteer for Scotland. It includes the location, use class, name, street address, and where relevant other properties such as build date and tourist ratings. The model also includes a pedestrian network (streets, pavements, tracks, steps, open spaces) which is used by an embedded route planner to calculate minimal cost routes, such as the shortest path. The city model also consists of a Visibility Engine that identifies the entities that are in the user’s vista space (Montello, 1993). To do this it accesses a digital surface model, sourced from LiDAR, which is a 2.5D representation of the city including buildings, vegetation, and land surface elevation. The Visibility Engine uses this dataset to offer a number of services, such as determining the line of sight from the observer to nominated points (e.g. which junctions are visible), and determining which entities within the city model are visible. Using these services, the IM determines if the destination is visible or not. 
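The kind of line-of-sight query offered by the Visibility Engine can be pictured with a toy example over a gridded 2.5D surface model. The paper does not describe the engine's algorithm or data formats at this level of detail, so the following is only an illustrative sketch under assumed inputs (a regular elevation grid and a fixed observer eye height), not the system's implementation.

```python
import numpy as np

def line_of_sight(dsm, observer, target, observer_height=1.7):
    """dsm: 2D array of surface elevations (metres) on a regular grid.
    observer/target: (row, col) cells. Returns True if no intermediate cell
    rises above the straight sight line between observer eye and target."""
    (r0, c0), (r1, c1) = observer, target
    eye = dsm[r0, c0] + observer_height
    tgt = dsm[r1, c1]
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(1, steps):
        t = i / steps
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        sight_height = eye + t * (tgt - eye)   # height of the sight line at this cell
        if dsm[r, c] > sight_height:           # e.g. a building blocks the view
            return False
    return True

# Flat terrain with a single 20 m "building" between observer and target.
dsm = np.zeros((10, 10))
dsm[5, 5] = 20.0
print(line_of_sight(dsm, (5, 0), (5, 9)))   # False: blocked by the building
print(line_of_sight(dsm, (0, 0), (0, 9)))   # True: clear line of sight
```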
3.4 Question-Answering server The QA server currently answers a range of definition and biographical questions such as, “Tell me more about the Scottish Parliament”, “Who was David Hume?”, “What is haggis?”, and requests to resume (eg. “Tell me more”). QA is also capable of recognizing out of scope requests, that is, either navigation-related questions that can be answered by computations from the City Model and dealt with elsewhere in the system (“How far away is the Scottish Parliament?”, “How do I get there?”), or exploration queries that cannot be handled yet (“When is the cannon gun fired from the castle?”). Question classification is entirely machine learning-based using the SMO algorithm (Keerthi et al., 1999) trained over 2013 annotated utterances. Once the question has been typed, QA proceeds to focus detection also using machine learning techniques (Mikhailsian et al., 2009). Detected foci include possibly anaphoric expressions (“Who was he?”, “Tell me more about the castle”). These expressions are resolved against the dialogue history and geographical context. QA then proceeds to a textual search on texts from the Gazetteer of Scotland (Gittings, 2012) and Wikipedia, and definitions from WordNet glosses. The task is similar to TAC KBP 2013 Entity Linking Track and named entity disambiguation (Cucerzan, 2007). Candidate answers are reranked using a trained confidence score with the top candidate used as the final answer. These are usually long, descriptive answers and are provided as a flow of sentence chunks that the user can interrupt (see table 2). The Interaction Manager queries the QA model and pushes information when a salient PoI is in the vicinity of the user. “Edinburgh’s most famous and historic thoroughfare, which has formed the heart of the Old Town since mediaeval times. The Royal Mile includes Castlehill, the Lawnmarket, the Canongate and the Abbey Strand, but, is officially known simply as the High Street.” Table 2: QA output: query on “Royal Mile” 3.5 Mobile client The mobile client app, installed on an Android smartphone (Samsung Galaxy S3), connects the user to the dialogue system using a 3G data connection. The client senses the user’s location using positioning technology using GNSS satellites (GPS and GLONASS) which is sent to the dialogue system at the rate of one update every two seconds. It also sends pace rate of the user from the accelerometer sensor. In parallel, the client also places a phone call using which the user communicates with the dialogue system. 4 Baseline system The baseline system chosen for evaluation was Samsung S-Voice, a state-of-the-art commercial smartphone speech interface. S-Voice is a Samsung Android mobile phone app that allows a user to use the functionalities of device using a speech interface. For example, the user can say “Call John” and it will dial John from the user’s contacts. It launches the Google Navigation app when users request directions and it activates Google Search for open ended touristic information questions. The Navigation app is capable of providing instructions in-situ using speech. We used the SVoice system for comparison because it provided an integrated state-of-the-art interface to use both a navigation app and also an information-seeking app using the same speech interface. Users were encouraged to use these apps using speech but were allowed to use the GUI interface when using speech wasn’t working (e.g. misrecognition of local names). Users obtained the same kind of in1663 formation (i.e. 
navigation directions, descriptions about entities such as people, places, etc) from the baseline system as they would from our system. However, our system interacted with the user using the speech modality only. 5 Experimental design Spacebook and the baseline were evaluated in the summer of 2012. We evaluated both systems with 17 subjects in the streets of Edinburgh. There were 11 young subjects (between 20 and 26 years, mean=22 ± 2) and 6 older subjects (between 50 and 71 years, mean=61 ± 11). They were mostly native English speakers (88%). 59% of the users were regular smartphone users and their mean overall time spent in the city was 76 months. The test subjects had no previous experience with the proposed system. They were recruited via email adverts and mail shots. Subjects were given a task sheet with 8 tasks in two legs (4 tasks per leg). These tasks included both navigation and tourist information tasks (see table 3). Subjects used our system for one of the legs and the baseline system for the other and the order was balanced. Each leg took up to 30 mins to finish and the total duration including questionnaires was about 1.5 hours. Figure 2 shows the route taken by the subjects. The route is about 1.3 miles long. Subjects were followed by the evaluator who made notes on their behaviour (e.g. subject looks confused, subject looks at or manipulates the phone, subject looks around, etc). Subjects filled in a demographic questionnaire prior to the experiment. After each leg, they filled in a system questionnaire (see appendix) rating their experience. After the end of the experiment, they filled out a comparative questionnaire and were debriefed. They were optionally asked to elaborate on their questionnaire ratings. Users were paid £20 after the experiment was over. 6 Results Subjects were asked to identify tasks that they thought were successfully completed. The perceived task success rates of the two systems were compared for each task using the Chi square test. The results show that there is no statistically significant difference between the two systems in terms of perceived task success although the baseline system had a better task completion rate in tasks 1-3, 5 and 6. Our system performed better in Figure 2: Task route tourist information tasks (4, 7) (see table 4). Task Our system Baseline p T1 (N) 77.7 100 0.5058 T2 (TI) 88.8 100 0.9516 T3 (N) 100 100 NA T4 (TI) 100 87.5 0.9516 T5 (N+TI) 62.5 100 0.1654 T6 (N+TI) 87.5 100 0.9516 T7 (TI) 100 55.5 0.2926 T8 (N) 75.0 88.8 0.9105 Table 4: % Perceived Task success - task wise comparison (N - navigation task, TI - Tourist Information task) The system questionnaires that were filled out by users after each leg were analysed. These consisted of questions concerning each system to be rated on a six point Likert scale (1-Strongly Disagree, 2-Disagree, 3-Somewhat Disagree, 4Somewhat Agree, 5-Agree, 6-Strongly Agree). The responses were paired and tested using a Wilcoxon Sign Rank test. Median and Mode for each system and significance in differences are shown in table 5. Results show that although our system is not performing significantly better than the baseline system (SQ1-SQ10 except SQ7), users seem to find it more understanding (SQ7) and more interesting to interact with (SQ11) than the baseline. We grouped the subjects by age group and tested their responses. We found that the young subjects (age group 20-26), also felt that 1664 Leg 1 (Task 1) Ask the system to guide you to the Red Fort restaurant. 
(Task 2) You’ve heard that Mary Queen of Scots lived in Edinburgh. Find out about her. (Task 3) Walk to the university gym. (Task 4) Near the gym there is an ancient wall with a sign saying “Flodden Wall”. Find out what that is. Leg 2 (Task 5) Try to find John Knox House and learn about the man. (Task 6) Ask the system to guide you to the Old College. What can you learn about this building? (Task 7) Try to find out more about famous Edinburgh people and places, for example, David Hume, John Napier, and Ian Rankin. Try to find information about people and places that you are personally interested in or that are related to what you see around you. (Task 8) Ask the system to guide you back to the Informatics Forum. Table 3: Tasks for the user they learned something new about the city using it (SQ12) (p < 0.05) while the elderly (age group 50-71) didn’t. We also found statistically significant differences in smartphone users rating for our system on their learning compared to the baseline (SQ12). Subjects were also asked to choose between the two systems given a number of requirements such as ease of use, use for navigation, tourist information, etc. There was an option to rank the systems equally (i.e. a tie). They were presented with the same requirements as the system questionnaire with one additional question - “Overall which system do you prefer?” (CQ0). Users’ choice of system based on a variety of requirements is shown in table 6. Users’ choice counts were tested using Chi-square test. Significant differences were found in users’ choice of system for navigation and tourist information requirements. Users preferred the baseline system for navigation (CQ2) and our system for touristic information (CQ3) on the city. Although there was a clear choice of systems based on the two tasks, there was no significant preference of one system over the other overall (CQ0). They chose our system as the most interesting system to interact with (CQ11) and that it was more informative than the baseline (CQ12). Figure 3 shows the relative frequency between user choices on comparative questions. 7 Analysis Users found it somewhat difficult to navigate using Spacebook (see comments in table 7). Although the perceived task success shows that our system was able to get the users to their destination and there was no significant difference between the two systems based on their questionnaire response on navigation, they pointed out a number of issues and suggested a number of modifications. Many Figure 3: Responses to comparative questions users noted that a visual map and the directional arrow in the baseline system was helpful for navigation. In addition, they noted that our system’s navigation instructions were sometimes not satisfactory. They observed that there weren’t enough instructions coming from the system at street junctions. They needed more confirmatory utterances (that they are walking in the right direction) (5 users) and quicker recovery and notification when walking the wrong way (5 users). They observed that the use of street names was confusing sometimes. Some users also wanted a route summary before the navigation instructions are given. The problem with Spacebook’s navigation policy was that it did not, for example, direct the user via easily visible landmarks (e.g. “Head towards the Castle”), and relies too much on street names. Also, due to the latency in receiving GPS information, the IM sometimes did not present instructions soon enough during evaluation. 
Sometimes it received erroneous GPS information and therefore got the user’s orientation wrong. These problems will be addressed in the future version. Some users did find navigation instructions useful because of the use of proximal landmarks such 1665 Question B Mode B Median S Mode S Median p SQ1 - Ease of use 4 4 5 4 0.8207 SQ2 - Navigation 4 4 5 4 0.9039 SQ3 - Tourist Information 2 3 4 4 0.07323 SQ4 - Easy to understand 5 5 5 5 0.7201 SQ5 - Useful messages 5 4 5 4 1 SQ6 - Response time 5 5 2 2 0.2283 SQ7 - Understanding 3 3 5 4 0.02546 SQ8 - Repetitive 2 3 2 3 0.3205 SQ9 - Aware of user environment 5 5 4 4 0.9745 SQ10 - Cues for guidance 5 5 5 5 0.1371 SQ11 - Interesting to interact with 5 4 5 5 0.01799 SQ12 - Learned something new 5 4 5 5 0.08942 Table 5: System questionnaire responses (B=Baseline, S=our system) Task Baseline Our system Tie pPreferred Preferred value CQ0 23.52 35.29 41.17 0.66 CQ1 35.29 29.41 35.29 0.9429 CQ2 64.70 0 35.29 0.004 CQ3 17.64 64.70 17.64 0.0232 CQ4 35.29 29.41 23.52 0.8187 CQ5 23.52 52.94 23.52 0.2298 CQ6 23.52 29.41 35.29 0.8187 CQ7 17.64 47.05 35.29 0.327 CQ8 29.41 23.52 47.05 0.4655 CQ9 29.41 52.94 17.64 0.1926 CQ10 47.05 29.41 23.52 0.4655 CQ11 5.88 76.47 17.64 0.0006 CQ12 0 70.58 29.41 0.005 Table 6: User’s choice on comparative questions (CQ are the same questions as SQ but requesting a ranking of the 2 systems) as KFC, Tesco, etc. (popular chain stores). Some users also suggested that our system should have a map and that routes taken should be plotted on them for reference. Based on the ratings and observations made by the users, we conclude that our first hypothesis that Spacebook would be more efficient for navigation than the baseline because of its speech-only interface was inconclusive. We believe so because users’ poor ratings for Spacebook may be due to the current choice of dialogue policy for navigation. It may be possible to reassure the user with a better dialogue policy with just the speech interface. However, this needs further investigation. Users found the information-search task interesting and informative when they used Spacebook (see sample user comments in table 8). They also found push information on nearby PoIs unexpected and interesting as they would not have found them otherwise. Many users believed that this could be an interesting feature that could help tourists. They also found that asking questions and finding answers was much easier with Spacebook compared to the baseline system, where sometimes users needed to type search keywords in. Another user observation was that they did not have to stop to listen to information presented by our system (as it was in speech) and could carry on walking. However, with the baseline system, they had to stop to read information off the screen. Although users in general liked the QA feature, many complained that Spacebook spoke too quickly when it was presenting answers. Some users felt that the system might lose context of the navigation task if presented with a PoI question. In contrast, some others noted Spacebook’s ability to interleave the two tasks and found it to be an advantage. Users’ enthusiasm for our system was observed when (apart from the points of interest that were in the experimental task list) they also asked spontaneous questions about James Watt, the Talbot Rice gallery, the Scottish Parliament and Edinburgh Castle. 
Some of the PoIs that the system pushed information about were the Royal College of Surgeons, the Flodden Wall, the Museum of Childhood, and the Scottish Storytelling Centre. Our system answered a mean of 2.5 out of 6.55 questions asked by users in leg 1 and 4.88 out of 8.5 questions in leg 2. Please note that an utterance is sent to QA if it is not parsed by the parser and therefore some utterances may not be legitmate questions themselves. Users were pushed a mean of 2.88 and 6.37 PoIs during legs 1 and 2. There were a total of 17 “tell me more” requests requesting the system to present more information (mean=1.35 ± 1.57). Evaluators who followed the subjects noted that the subjects felt difficulty using the baseline system as they sometimes struggled to see the screen 1666 1. “It’s useful when it says ’Keep walking’ but it should say it more often.” 2. “[Your system] not having a map, it was sometimes difficult to check how aware it was of my environment.” 3. “[Google] seemed to be easier to follow as you have a map as well to help.” 4. “It told me I had the bank and Kentucky Fried Chicken so I crossed the road because I knew it’d be somewhere over beside them. I thought ’OK, great. I’m going the right way.’ but then it didn’t say anything else. I like those kind of directions because when it said to go down Nicolson Street I was looking around trying to find a street sign.” 5. “The system keeps saying ’when we come to a junction, I will tell you where to go’, but I passed junctions and it didn’t say anything. It should say ’when you need to change direction, I will tell you.’” 6. “I had to stop most of the times for the system to be aware of my position. If walking very slowly, its awareness of both landmarks and streets is excellent.” Table 7: Sample user comments on the navigation task 1. “Google doesn’t *offer* any information. I would have to know what to ask for...” 2. “Since many information is given without being asked for (by your system), one can discover new places and landmarks even if he lives in the city. Great feature!!” 3. “I didn’t feel confident to ask [your system] a question and still feel it would remember my directions” 4. “Google could only do one thing at a time, you couldn’t find directions for a place whilst learning more.” 5. “If she talked a little bit slower [I would use the system for touristic purposes]. She just throws masses of information really, really quickly.” Table 8: Sample user comments on the tourist information task in bright sunlight. They sometimes had difficulty identifying which way to go based on the route plotted on the map. In comparison, subjects did not have to look at the screen when they used our system. Based on the ratings and observations made by the users about our system’s tourist information features such as answering questions and pushing PoI information, we have support for our second hypothesis: that users find a dialogue interface which integrates question-answering and navigation within a shared context to be useful for finding information about entities in the urban environment. 8 Future plans We plan to extend Spacebook’s capabilities to address other challenges in pedestrian navigation and tourist information. Many studies have shown that visible landmarks provide better cues for navigation than street names (Ashweeni and Steed, 2006; Hiley et al., 2008). 
We will use visible landmarks identified using the visibility engine to make navigation instructions more effective, and we plan to include entities in dialogue and visual context as candidates for PoI push, and to implement an adaptive strategy that will estimate user interests and push information that is of interest to them. We are also taking advantage of user’s local knowledge of the city to present navigation instructions only for the part of the route that the user does not have any knowledge of. These features, we believe, will make users’ experience of the interface more pleasant, useful and informative. 9 Conclusion We presented a mobile dialogue app called Spacebook to support pedestrian users in navigation and tourist information gathering in urban environments. The system is a speech-only interface and addresses navigation and tourist information in an integrated way, using a shared dialogue context. For example, using the navigational context, Spacebook can push point-of-interest information which can then initiate touristic exploration tasks using the QA module. We evaluated the system against a state-of-theart baseline (Samsung S-Voice with Google Navigation and Search) with a group of 17 users in the streets of Edinburgh. We found that users found Spacebook interesting to interact with, and that it was their system of choice for touristic information exploration tasks. These results were statistically significant. Based on observations and user ratings, we conclude that our speech-only system was less preferred for navigation and more preferred for tourist information tasks due to features such as PoI pushing and the integrated QA module, when compared to the baseline system. Younger users, who used Spacebook, even felt that they learned new facts about the city. Acknowledgments The research leading to these results was funded by the European Commission’s Framework 7 programme under grant 1667 agreement no. 270019 (SPACEBOOK project). References K. B. Ashweeni and A. Steed. 2006. A natural wayfinding exploiting photos in pedestrian navigation systems. In Proceedings of the 8th conference on Human-computer interaction with mobile devices and services. P. Bartie and W. Mackaness. 2006. Development of a speech-based augmented reality system to support exploration of cityscape. Transactions in GIS, 10:63–86. P. Bartie and W. Mackaness. 2012. D3.4 Pedestrian Position Tracker. Technical report, The SPACEBOOK Project (FP7/2011-2014 grant agreement no. 270019). P. Bartie and W. Mackaness. 2013. D3.1.2 The SpaceBook City Model. Technical report, The SPACEBOOK Project (FP7/2011-2014 grant agreement no. 270019). D. Byron, A. Koller, J. Oberlander, L. Stoia, and K. Striegnitz. 2007. Generating Instructions in Virtual Environments (GIVE): A challenge and evaluation testbed for NLG. In Proceedings of the Workshop on Shared Tasks and Comparative Evaluation in Natural Language Generation. S. Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of EMNLP-CoNLL. R. Dale, S. Geldof, and J. Prost. 2003. CORAL : Using Natural Language Generation for Navigational Assistance. In Proceedings of ACSC2003, Australia. Nina Dethlefs and Heriberto Cuay´ahuitl. 2011. Hierarchical Reinforcement Learning and Hidden Markov Models for Task-Oriented Natural Language Generation. In Proc. of ACL. B. Gittings. 2012. The Gazetteer for Scotland http://www.scottish-places.info. H. Hiley, R. Vedantham, G. Cuellar, A. Liuy, N. Gelfand, R. Grzeszczuk, and G. Borriello. 2008. 
Landmark-based pedestrian navigation from collections of geotagged photos. In Proceedings of the 7th Int. Conf. on Mobile and Ubiquitous Multimedia (MUM). S. Janarthanam and O. Lemon. 2011. The GRUVE Challenge: Generating Routes under Uncertainty in Virtual Environments. In Proceedings of ENLG. S. Janarthanam, O. Lemon, X. Liu, P. Bartie, W. Mackaness, T. Dalmas, and J. Goetze. 2012. Integrating location, visibility, and Question-Answering in a spoken dialogue system for Pedestrian City Exploration. In Proc. of SIGDIAL 2012, S. Korea. H. Kashioka, T. Misu, E. Mizukami, Y. Shiga, K. Kayama, C. Hori, and H. Kawai. 2011. Multimodal Dialog System for Kyoto Sightseeing Guide. In Asia-Pacific Signal and Information Processing Association Annual Summit and Conference. S.S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy. 1999. Improvements to Platt’s SMO Algorithm for SVM Classifier Design. Neural Computation, 3:637–649. J. Ko, F. Murase, T. Mitamura, E. Nyberg, M. Tateishi, I. Akahori, and N. Hataoka. 2005. CAMMIA: A Context-Aware Spoken Dialog System for Mobile Environments. In IEEE ASRU Workshop. C. Kray, K. Laakso, C. Elting, and V. Coors. 2003. Presenting Route Instructions on Mobile Devices. In Proceedings of IUI 03, Florida. R. Malaka and A. Zipf. 2000. Deep Map - challenging IT research in the framework of a tourist information system. In Information and Communication Technologies in Tourism 2000, pages 15–27. Springer. A. Mikhailsian, T. Dalmas, and R. Pinchuk. 2009. Learning foci for question answering over topic maps. In Proceedings of ACL 2009. D. Montello. 1993. Scale and multiple psychologies of space. In A. U. Frank and I. Campari, editors, Spatial information theory: A theoretical basis for GIS. M. Raubal and S. Winter. 2002. Enriching wayfinding instructions with local landmarks. In Second International Conference GIScience. Springer, USA. C.J. Shroder, W. Mackaness, and B. Gittings. 2011. Giving the Right Route Directions: The Requirements for Pedestrian Navigation Systems. Transactions in GIS, pages 419–438. N. Webb and B. Webber. 2009. Special Issue on Interactive Question Answering: Introduction. Natural Language Engineering, 15(1):1–8. P. A. Zandbergen and S. J. Barbeau. 2011. Positional Accuracy of Assisted GPS Data from HighSensitivity GPS-enabled Mobile Phones. Journal of Navigation, 64(3):381–399. 1668
2013
163
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1669–1679, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Lightly Supervised Learning of Procedural Dialog Systems Svitlana Volkova CLSP Johns Hopkins University Baltimore, MD [email protected] Pallavi Choudhury, Chris Quirk, Bill Dolan NLP Group Microsoft Research Redmond, WA pallavic,chrisq, [email protected] Luke Zettlemoyer Computer Science and Engineering University of Washington Seattle, WA [email protected] Abstract Procedural dialog systems can help users achieve a wide range of goals. However, such systems are challenging to build, currently requiring manual engineering of substantial domain-specific task knowledge and dialog management strategies. In this paper, we demonstrate that it is possible to learn procedural dialog systems given only light supervision, of the type that can be provided by non-experts. We consider domains where the required task knowledge exists in textual form (e.g., instructional web pages) and where system builders have access to statements of user intent (e.g., search query logs or dialog interactions). To learn from such textual resources, we describe a novel approach that first automatically extracts task knowledge from instructions, then learns a dialog manager over this task knowledge to provide assistance. Evaluation in a Microsoft Office domain shows that the individual components are highly accurate and can be integrated into a dialog system that provides effective help to users. 1 Introduction Procedural dialog systems aim to assist users with a wide range of goals. For example, they can guide visitors through a museum (Traum et al., 2012; Aggarwal et al., 2012), teach students physics (Steinhauser et al., 2011; Dzikovska et al., 2011), or enable interaction with a health care U: “I want to add page numbers and a title” S: “Top or Bottom of the page?” U: “Top” S: “Please select page design from the templates” (*System shows drop down menu*) U: *User selects from menu* S: “Enter header or footer content” U: “C.V.” S: “Task completed.” Figure 1: An example dialog interaction between a system (S) and user (U) that can be automatically achieved by learning from instructional web page and query click logs. system (Morbini et al., 2012; Rizzo et al., 2011). However, such systems are challenging to build, currently requiring expensive, expert engineering of significant domain-specific task knowledge and dialog management strategies. In this paper, we present a new approach for learning procedural dialog systems from taskoriented textual resources in combination with light, non-expert supervision. Specifically, we assume access to task knowledge in textual form (e.g., instructional web pages) and examples of user intent statements (e.g., search query logs or dialog interactions). Such instructional resources are available in many domains, ranging from recipes that describe how to cook meals to software help web pages that describe how to achieve goals by interacting with a user interface.1 1ehow.com,wikianswers.com 1669 There are two key challenges: we must (1) learn to convert the textual knowledge into a usable form and (2) learn a dialog manager that provides robust assistance given such knowledge. For example, Figure 1 shows the type of task assistance that we are targeting in the Microsoft Office setting, where the system should learn from web pages and search query logs. 
Our central contribution is to show that such systems can be built without the help of knowledge engineers or domain experts. We present new approaches for both of our core problems. First, we introduce a method for learning to map instructions to tree representations of the procedures they describe. Nodes in the tree represent points of interaction with the questions the system can ask the user, while edges represent user responses. Next, we present an approach that uses example user intent statements to simulate dialog interactions, and learns how to best map user utterances to nodes in these induced dialog trees. When combined, these approaches produce a complete dialog system that can engage in conversations by automatically moving between the nodes of a large collection of induced dialog trees. Experiments in the Windows Office help domain demonstrate that it is possible to build an effective end-to-end dialog system. We evaluate the dialog tree construction and dialog management components in isolation, demonstrating high accuracy (in the 80-90% range). We also conduct a small-scale user study which demonstrates that users can interact productively with the system, successfully completing over 80% of their tasks. Even when the system does fail, it often does so in a graceful way, for example by asking redundant questions but still reaching the goal within a few additional turns. 2 Overview of Approach Our task-oriented dialog system understands user utterances by mapping them to nodes in dialog trees generated from instructional text. Figure 2 shows an example of a set of instructions and the corresponding dialog tree. This section describes the problems that we must solve to enable such interactions, and outlines our approach for each. Knowledge Acquisition We extract task knowledge from instructional text (e.g., Figure 2, left) that describes (1) actions to be performed, such as clicking a button, and (2) places where input is needed from the user, for example to enter the contents of the footer or header they are trying to create. We aim to convert this text into a form that will enable a dialog system to automatically assist with the described task. To this end, we construct dialog trees (e.g., Figure 2, right) with nodes to represent entire documents (labeled as topics t), nodes to represent user goals or intents (g), and system action nodes (a) that enable execution of specific commands. Finally, each node has an associated system action as, which can prompt user input (e.g., with the question “Top or bottom of the page?”) and one or more user actions au that represent possible responses. All nodes connect to form a tree structure that follows the workflow described in the document. Section 3 presents a scalable approach for inducing dialog trees. Dialog Management To understand user intent and provide task assistance, we need a dialog management approach that specifies what the system should do and say. We adopt a simple approach that at all times maintains an index into a node in a dialog tree. Each system utterance is then simply the action as for that node. However, the key challenge comes in interpreting user utterances. After each user statement, we must automatically update our node index. At any point, the user can state a general goal (e.g., “I want to add page numbers”), refine their goal (e.g., “in a footer”), or both (e.g.,“I want to add page numbers in the footer”). Users can also change their goals in the process of completing the tasks. 
We develop a simple classification approach that is robust to these different types of user behavior. Specifically, we learn classifiers that, given the dialog interaction history, predict how to pick the next tree node from the space of all nodes in the dialog trees that define the task knowledge. We isolate two specific cases, classifying initial user utterances (Section 4) and classifying all subsequent utterances (Section 5). This approach allows us to isolate the difference in language for the two cases, and bias the second case to prefer tree nodes near the current one. The resulting approach allows for significant flexibility in traversing the dialog trees. Data and Evaluation We collected a large set of such naturally-occurring web search queries that resulted in a user click on a URL in the Microsoft Office help domain.2 We found that queries longer that 4-5 words often resembled natural language utterances that could be used for dialog interac2http://office.microsoft.com 1670 Figure 2: An example instructional text paired with a section of the corresponding dialog tree. tions, for example how do you add borders, how can I add a footer, how to insert continuous page numbers, and where is the header and footer. We also collected instructional texts from the web pages that describe how to solve 76 of the most pressing user goals, as indicated by query click log statistics. On average 1,000 user queries were associated with each goal. To some extent clickthroughs can be treated as a proxy for user frustration; popular search targets probably represent user pain points. 3 Building Dialog Trees from Instructions Our first problem is to convert sets of instructions for user goals to dialog trees, as shown in Figure 2. These goals are broadly grouped into topics (instructional pages). In addition, we manually associate each node in a dialog tree with a training set of 10 queries. For the 76 goals (246 instructions) in our data, this annotation effort took a single annotator a total of 41 hours. Scaling this approach to the entire Office help domain would require a focused annotation effort. Crucially, though, this annotation work can be carried out by non-specialists, and could even be crowdsourced (Bernstein et al., 2010). Problem Definition As input, we are given instructional text (p1 . . . pn), comprised of topics (t1 . . . tn) describing: (1) high-level user intents (e.g., t1 – “add and format page numbers”) (2) goals (g1, . . . , gk) that represent more specific user intents (e.g., g1 – “add header or footer content to a preformatted page number design”, g2 – “place the page number in the side margin of the page”). Given instructional text p1 . . . pn and queries q1 . . . qm per topic ti, our goals are as follows: Figure 3: Relationships between user queries and OHP with goals, instructions and dialog trees. - for every instructional page pi extract a topic ti and a set of goals g1 . . . gk; - for every goal gj for a topic ti, extract a set of instructions i1 . . . il; - from topics, goals and instructions, construct dialog trees f1 . . . fn (one dialog tree per topic). Classify instructions to user interaction types thereby identifying system action nodes a1 s . . . al s. Transitions between these nodes are the user actions a1 u . . . al u. Figure 2 (left) presents an example of a topic extracted from the help page, and a set of goals and instructions annotated with user action types. 
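To make the target representation concrete, the sketch below shows one possible encoding of a dialog tree with topic, goal, and action nodes, loosely following Figure 2 (right). The class and field names are illustrative assumptions rather than the paper's actual data structures; the example tree uses the "add and format page numbers" topic from Figure 2.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionNode:
    system_prompt: str            # what the system says or asks at this step
    user_action: str              # "binary", "selection", "input", or "none"
    children: List["ActionNode"] = field(default_factory=list)

@dataclass
class GoalNode:
    description: str
    actions: List[ActionNode] = field(default_factory=list)

@dataclass
class TopicNode:
    title: str
    goals: List[GoalNode] = field(default_factory=list)

# A tiny tree for one goal under the "add and format page numbers" topic.
tree = TopicNode(
    title="add and format page numbers",
    goals=[GoalNode(
        description="add header or footer content to a preformatted page number design",
        actions=[
            ActionNode("Top or bottom of the page?", "binary"),
            ActionNode("Please select a page design from the templates.", "selection"),
            ActionNode("Enter the header or footer content.", "input"),
        ],
    )],
)
print(tree.goals[0].actions[0].system_prompt)
```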
In the next few sections of the paper, we outline an overall system component design demonstrating how queries and topics are mapped to the dialog trees in Figure 3. The figure shows many-to-one relations between queries and topics, one-to-many relations between topics and goals and between goals and instructions, and one-to-one relations between topics and dialog trees.

User Action Classification
We aim to classify the instructional text (i_1 . . . i_l) for every goal g_j in the dialog tree into four categories: binary, selection, input or none. Given a single instruction i with category a_u, we use a log-linear model to represent the distribution over the space of possible user actions. Under this representation, the user action distribution is defined as:

p(a_u \mid i, \theta) = \frac{e^{\theta \cdot \phi(a_u, i)}}{\sum_{a_u'} e^{\theta \cdot \phi(a_u', i)}}    (1)

where \phi(a_u, i) \in \mathbb{R}^n is an n-dimensional feature representation and \theta is a parameter vector we aim to learn. Features are indicator functions of properties of the instructions and a particular class. For smoothing, we use a zero-mean, unit-variance Gaussian prior N(0, 1) that penalizes \theta for drifting too far from the mean, along with the following optimization function:

\log p(A_u, \theta \mid I) = \log p(A_u \mid I, \theta) + \log p(\theta) = \sum_{(a_u, i) \in (A_u, I)} \log p(a_u \mid i, \theta) - \sum_i \frac{(\theta_i - \mu_i)^2}{2\sigma_i^2} + k    (2)

We use L-BFGS (Nocedal and Wright, 2000) as an optimizer.

Experimental Setup
As described in Section 2, our dataset consists of 76 goals grouped into 30 topics (on average 2-3 goals per topic) for a total of 246 instructions (on average 3 instructions per goal). We manually label all instructions with user action (a_u) categories. The distribution over categories is binary=14, input=23, selection=80 and none=129. The data is skewed towards the categories none and selection. Many instructions do not require any user input and can be executed automatically, e.g., "On the Insert tab, in the Header and Footer group, click Page Number". Example instructions with corresponding user action labels are shown in Figure 2 (left). Finally, we divide the 246 instructions into 2 sets: 80% training and 20% test, 199 and 47 instructions respectively.

Results
We apply the user action type classification model described in Eq. 1 and Eq. 2 to classify instructions from the test set into the 4 categories. In Table 1 we report classification results for 2 baselines, a majority-class and a heuristic-based approach, and 2 models with different feature types: ngrams and ngrams + stems. For the heuristic baseline, we use simple lexical clues to classify instructions (e.g., "X or Y" for binary, "select Y" for selection, and "type X" or "insert Y" for input). Table 1 summarizes the results of mapping instructional text to user actions.

Features                 # Features   Accuracy
Baseline 1: Majority     –            0.53
Baseline 2: Heuristic    –            0.64
Ngrams                   10,556       0.89
Ngrams + Stems           12,196       0.89
Table 1: Instruction classification results.

Building the Dialog Trees
Based on the classified user action types, we identify the system actions a_s^1 . . . a_s^l, which correspond to the 3 types of user actions a_u^1 . . . a_u^l (excluding the none type), for every goal in a topic t_i. This involved associating all words from an instruction i_l with a system action a_s^l. Finally, for every topic we automatically construct a dialog tree as shown in Figure 2 (right). The dialog tree includes a topic t_1 with goals g_1 . . . g_4, and actions (user actions a_u and system actions a_s).
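As a rough, hedged sketch of the user action classification step above: the log-linear model with a Gaussian prior and L-BFGS optimization can be approximated by an L2-regularized multiclass logistic regression over instruction n-grams, for example with scikit-learn. The tiny training set below is invented for illustration, and the feature set is far simpler than the paper's indicator functions.

```python
# Sketch of the instruction-to-user-action classifier: L2-regularized
# (Gaussian-prior-like) logistic regression over instruction n-grams, fit with L-BFGS.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

instructions = [
    "On the Insert tab, in the Header and Footer group, click Page Number",
    "Choose a page number design from the gallery of designs",
    "Type the content of the footer",
    "Click Top of Page or Bottom of Page",
]
labels = ["none", "selection", "input", "binary"]   # invented toy labels

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),                             # word 1-2 grams
    LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000),
)
clf.fit(instructions, labels)
print(clf.predict(["Select the text in a bulleted or numbered list"]))
```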
Definition 1. A dialog tree encodes a user-system dialog flow about a topic t_i, represented as a directed unweighted graph f_i = (V, E) in which topics, goals and actions are nodes of the corresponding types {t_1 . . . t_n}, {g_1 . . . g_k}, {a^1 . . . a^l} ∈ V. There is a hierarchical dependency between topic, goal and action nodes. User interactions are represented by edges t_i → {g_1 . . . g_k} and a_u^1 = (g_j, a^1) . . . a_u^l = (a^{k-1}, a^k) ∈ E.

For example, in the dialog tree in Figure 2 there is a relation t_1 → g_4 between the topic t_1 "add and format page numbers" and the goal g_4 "include page of page X of Y with the page number". Moreover, in the dialog tree, a topic-level node has a single index i ∈ [1..n], where n is the number of topics. Every goal node includes information about its parent (topic) node and has a double index i.j, where j ∈ [1..k]. Finally, action nodes include information about their parent (goal) and grandparent (topic) nodes and have a triple index i.j.z, where z ∈ [1..l].

4 Understanding Initial Queries
This section presents a model for classifying initial user queries to nodes in a dialog tree, which allows for a variety of different types of queries. They can be under-specified, including information about a topic only (e.g., "add or delete page numbers"); partially specified, including information about a goal (e.g., "insert page number"); or over-specified, including information about an action (e.g., "page numbering at bottom page").

Figure 4: Mapping initial user queries to nodes at different depths in a dialog tree.

Problem Definition
Given an initial query, the dialog system initializes to a state s_0, searches for the deepest relevant node given the query, and maps the query to a node at the topic (t_i), goal (g_j) or action (a_k) level in the dialog tree f_i, as shown in Figure 4. More formally, as input we are given automatically constructed dialog trees f_1 . . . f_n for instructional text (help pages), annotated with topic, goal and action nodes and associated with system actions, as shown in Figure 2 (right). From the query logs, we associate queries with each node type: topic q^t, goal q^g and action q^a. This is shown in Figures 2 and 4. We join these dialog trees representing different topics into a dialog network by introducing a global root. Within the network, we aim to find (1) an initial dialog state s_0 that maximizes the probability of the state given a query, p(s_0|q, θ); and (2) the deepest relevant node v ∈ V at the topic t_i, goal g_j or action a_k depth in the tree.

Initial Dialog State Model
We aim to predict the best node in a dialog tree, t_i, g_j, a^l ∈ V, based on a user query q. A query-to-node mapping is encoded as an initial dialog state s_0 represented by a binary vector over all nodes in the dialog network: s_0 = [t_1, g_{1.1}, g_{1.2}, g_{1.2.1}, . . . , t_n, g_{n.1}, g_{n.1.1}]. We employ a log-linear model and try to maximize the initial dialog state distribution over the space of all nodes in the dialog network:

p(s_0 \mid q, \theta) = \frac{e^{\sum_i \theta_i \phi_i(s_0, q)}}{\sum_{s_0'} e^{\sum_i \theta_i \phi_i(s_0', q)}}    (3)

Optimization follows Eq. 2.
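The following is a minimal sketch of the kind of scoring Eq. 3 implies: every node in the dialog network is scored with a weighted feature vector computed from the query and the node, and probabilities are obtained with a softmax over all nodes. The two word-overlap features, the weights, and the toy nodes are invented placeholders; the paper's feature set described next is much richer.

```python
import math

def features(node, query):
    """Toy feature functions phi_i(s0, q): word overlap with a node's
    associated queries and with its system prompt (illustrative only)."""
    q = set(query.lower().split())
    node_words = set(" ".join(node.get("queries", [])).lower().split())
    prompt_words = set(node.get("prompt", "").lower().split())
    return {"query_overlap": len(q & node_words),
            "prompt_overlap": len(q & prompt_words)}

def initial_state(nodes, query, weights):
    """Pick the node maximizing p(s0 | q, theta) under a log-linear model."""
    scores = {n["index"]: sum(weights[f] * v for f, v in features(n, query).items())
              for n in nodes}
    z = sum(math.exp(s) for s in scores.values())
    probs = {i: math.exp(s) / z for i, s in scores.items()}
    return max(probs, key=probs.get), probs

nodes = [
    {"index": "1", "prompt": "add and format page numbers", "queries": ["page numbers"]},
    {"index": "1.2.1", "prompt": "top or bottom of the page", "queries": ["page number in footer"]},
]
print(initial_state(nodes, "how can I add a footer page number",
                    {"query_overlap": 1.0, "prompt_overlap": 0.5}))
```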
We experimented with a variety of features. Lexical features included query n-grams (up to 3-grams) associated with every node in a dialog tree, with stopwords removed and query unigrams stemmed. We also used network structural features: tf-idf scores, query n-gram overlap with the topic and goal descriptions as well as with system action prompts, and query n-gram overlap with a history including queries from parent nodes.

Features              Accuracy
                      Topic   Goal   Action
Random                0.10    0.04   0.04
TFIDF 1-Best          0.81    0.21   0.45
Lexical (L)           0.92    0.66   0.63
L + 10TFIDF           0.94    0.66   0.64
L + 10TFIDF + PO      0.94    0.65   0.65
L + 10TFIDF + QO      0.95    0.72   0.69
All above + QHistO    0.96    0.73   0.71
Table 2: Initial dialog state classification results, where L stands for lexical features, 10TFIDF for the 10 best tf-idf scores, PO for prompt overlap, QO for query overlap, and QHistO for query history overlap.

Experimental Setup
For each dialog tree, nodes corresponding to single instructions were hand-annotated with a small set of user queries, as described in Section 3. Approximately 60% of all action nodes have no associated queries. (There are multiple possible reasons for this: the software user interface may already make it clear how to accomplish this intent, the user may not understand that the software makes this fine-grained option available to them, or their experience with search engines may lead them to state their intent in a more coarse-grained way.) For the 76 goals, the resulting dataset consists of 972 node-query pairs, 80% training and 20% test.

Results
The initial dialog state classification model for finding a single node given an initial query is described in Eq. 3. We chose two simple baselines: (1) randomly select a node in the dialog network, and (2) use a tf-idf 1-best model, in which we use cosine similarity to rank all nodes in the dialog network and select the node with the highest rank. Stemming, stopword removal and including the top 10 tf-idf results as features led to a 19% increase in accuracy at the action node level over baseline (2). Adding the following features led to an overall 26% improvement: query overlap with a system prompt (PO), query overlap with other node queries (QO), and query overlap with its parent queries (QHistO). We present more detailed results for topic, goal and action nodes in Table 2. For nodes deeper in the network, the task of mapping a user query to an action becomes more challenging. Note, however, that the action node accuracy numbers actually understate the utility of the resulting dialog system. The reason is that even incorrect node assignments can lead to useful system performance. As long as a misclassification results in the query being assigned to a node that is too high within the correct dialog tree, the user will experience a graceful failure: they may be forced to answer some redundant questions, but they will still be able to accomplish the task.
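For reference, the tf-idf 1-best baseline used in the comparison above can be sketched as follows: represent every node by its associated text, rank nodes by cosine similarity to the query, and take the top-ranked node. This is a hedged approximation with made-up node texts, not the authors' exact setup.

```python
# Sketch of a tf-idf 1-best baseline: cosine-similarity ranking of dialog-network nodes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

node_texts = {
    "1":     "add and format page numbers",
    "1.2":   "add header or footer content to a preformatted page number design",
    "1.2.1": "top or bottom of the page",
}
indices, texts = zip(*node_texts.items())

vectorizer = TfidfVectorizer()
node_matrix = vectorizer.fit_transform(texts)          # one row per node

def tfidf_one_best(query):
    sims = cosine_similarity(vectorizer.transform([query]), node_matrix)[0]
    return indices[sims.argmax()]

print(tfidf_one_best("how can I add a footer"))
```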
5 Understanding Query Refinements
We also developed a classifier model for mapping follow-up queries to the nodes in a dialog network, while maintaining a dialog state that summarizes the history of the current interaction.

Problem Definition
Similar to the problem definition in Section 4, we are given a network of dialog trees f_1 . . . f_n and a query q′, but in addition we are given the previous dialog state s, which contains the previous user utterance q and the last system action a_s. We aim to find a new dialog state s′ that pairs a node from the dialog tree with updated history information, thereby undergoing a dialog state update. We learn a linear classifier that models p(s′|q′, q, a_s, θ), the dialog state update distribution, where we constrain the new state s′ to contain the new utterance q′ we are interpreting. This distribution models three transition types: append, override and reset.

Definition 2. An append action defines a dialog state update when transitioning from a node to its children at any depth in the same dialog tree, e.g., t_i → g_{i.j} (from a topic to a goal node), g_{i.j} → a_{i.j.z} (from a goal to an action node), etc.

Definition 3. An override action defines a dialog state update when transitioning from a goal to its sibling node. It could also be from an action node to another action node in its parent's sibling node in the same dialog tree (a transition from a_{i.j.z} must be to a different goal, or to an action node in a different goal, but in the same dialog tree), e.g., g_{i.j-1} → g_{i.j} (from one goal to another goal in the same topic tree), a_{i.j.z} → a_{i.¬j.z} (from an action node to another action node in a different goal in the same dialog tree), etc.

Definition 4. A reset action defines a dialog state update when transitioning from a node in the current dialog tree to any other node at any depth in a dialog tree other than the current dialog tree, e.g., t_i → t_{¬i} (from one topic node to another topic node), t_i → g_{¬i.j} (from a topic node to a goal node in a different topic subtree), etc.

The append action should be selected when the user's intent is to clarify a previous query (e.g., "insert page numbers" → "page numbers in the footer"). An override action is appropriate when the user's intent is to change a goal within the same topic (e.g., "insert page number" → "change page number"). Finally, a reset action should be used when the user's intent is to restart the dialog (e.g., "insert page x of y" → "set default font"). We present more examples of append, override and reset dialog state update actions in Table 3.

Previous Utterance, q | User Utterance, q′ | Transition | Update Action, a
inserting page numbers (q^t_1) | add a background | t_i → t_{¬i} | 2, reset-T, reset
how to number pages (q^t_2) | insert numbers on pages in margin | t_i → g_{i.j} | 1.4, append-G, append
page numbers (q^t_3) | set a page number in a footer | t_i → a_{i.j.z} | 1.2.1, append-A, append
page number a document (q^t_4) | insert a comment | t_i → g_{¬i.j} | 21.1, reset-G, reset
page number (q^t_5) | add a comment "redo" | t_i → a_{¬i.j.z} | 21.2.1, reset-A, reset
page x of y (q^g_1) | add a border | g_{i.j} → t_{¬i} | 6, reset-T, reset
format page x of x (q^g_2) | enter text and page numbers | g_{i.j} → g_{i.¬j} | 1.1, override-G, override
enter page x of y (q^g_3) | page x of y in footer | g_{i.j} → a_{i.j.z} | 1.3.1, append-A, append
inserting page x of y (q^g_4) | setting a default font | g_{i.j} → g_{¬i.j} | 6.1, reset-G, reset
showing page x of x (q^g_5) | set default font and style | g_{i.j} → a_{¬i.j.z} | 6.4.1, reset-A, reset
page numbers bottom (q^a_1) | make a degree symbol | a_{i.j.z} → t_{¬i} | 13, reset-T, reset
numbering at bottom page (q^a_2) | insert page numbers | a_{i.j.z} → g_{i.¬j} | 1.1, override-G, override
insert footer page numbers (q^a_3) | page number design | a_{i.j.z-1} → a_{i.j.z} | 1.2.2, append-A, append
headers page number (q^a_4) | comments in document | a_{i.j.z} → g_{¬i.j} | 21.1, reset-G, reset
page number in a footer (q^a_5) | changing initials in a comment | a_{i.j.z} → a_{¬i.j.z} | 21.2.1, reset-A, reset
Table 3: Example q and q′ queries for append, override and reset dialog state updates.

Figure 5: Information state updates: (a) updates from topic node t_i, (b) updates from goal node g_j, (c) updates from action node a^l; append, override and reset updates based on Definitions 2, 3 and 4, respectively.

Figure 5 illustrates examples of append, override and reset dialog state updates. All transitions presented in Figure 5 are aligned with the example q and q′ queries in Table 3.
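Reading Definitions 2-4 off the hierarchical node indices gives a simple rule for naming a candidate transition. The sketch below is our own illustration of that reading, not the paper's classifier (which learns the update from features); same-tree moves not explicitly covered by the definitions are lumped with override here as a simplifying assumption.

```python
def transition_type(current_index, new_index):
    """Name the dialog state update implied by moving between two nodes.

    Indices are dotted strings such as "1", "1.2", "1.2.1" (topic.goal.action).
    """
    cur = current_index.split(".")
    new = new_index.split(".")
    if cur[0] != new[0]:
        return "reset"            # Definition 4: target is in a different dialog tree
    if len(new) > len(cur) and new[:len(cur)] == cur:
        return "append"           # Definition 2: target is a descendant node
    return "override"             # Definition 3 (plus other same-tree moves, simplified)

assert transition_type("1", "1.4") == "append"        # t_i -> g_{i.j}
assert transition_type("1.2", "1.1") == "override"    # g_{i.j} -> sibling goal
assert transition_type("1.2.1", "21.1") == "reset"    # jump to another tree
```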
Dialog State Update Model
We use a log-linear model to maximize the dialog state distribution over the space of all nodes in the dialog network:

p(s' \mid q', q, a_s, \theta) = \frac{e^{\sum_i \theta_i \phi_i(s', q', a_s, q)}}{\sum_{s''} e^{\sum_i \theta_i \phi_i(s'', q', a_s, q)}}    (4)

Optimization is done as described in Section 3.

Experimental Setup
Ideally, dialog systems should be evaluated relative to large volumes of real user interaction data. Our query log data, however, does not include dialog turns, and so we turn to simulated user behavior to test our system. Our approach, inspired by recent work (Schatzmann et al., 2006; Scheffler and Young, 2002; Georgila et al., 2005), involves simulating dialog turns as follows. To define a state s, we sample a query q from the set of queries per node v and get the corresponding system action a_s for this node; to define a state s′, we sample a new query q′ from another node v′ ∈ V, v ≠ v′, which is sampled using a prior probability biased towards append: p(append)=0.7, p(override)=0.2, p(reset)=0.1. This prior distribution defines a dialog strategy where the user primarily continues the current goal and rarely resets. We simulate 1100 previous state and new query pairs for training and 440 pairs for testing. The features were lexical, including word n-grams and stems with stopwords removed; we also tested network structure features, such as:
- old q and new q′ query overlap (QO);
- q′ overlap with the system prompt a_s (PO);
- q′ n-gram overlap with all queries from the old state s (SQO);
- q′ n-gram overlap with all queries from the new state s′ (S′QO);
- q′ n-gram overlap with all queries from the new state's parents (S′ParQO).

Results
Table 4 reports results for dialog state updates for topic, goal and action nodes. We also report performance for two types of dialog updates: append (App.) and override (Over.). We found that the combination of lexical features and query overlap with the previous and new state queries yielded the best accuracies: 0.95, 0.84 and 0.83 at the topic, goal and action node levels, respectively. As in Section 4, the accuracy for topic-level nodes was highest. Perhaps surprisingly, the reset action was perfectly predicted (accuracy is 100% for all feature combinations, not shown in Table 4). The accuracies for append and override actions are also high (append 95%, override 90%).

Features       Topic   Goal   Action   App.   Over.
L              0.92    0.76   0.78     0.90   0.89
L+Q            0.93    0.80   0.80     0.92   0.83
L+P            0.93    0.80   0.79     0.91   0.85
L+Q+P          0.94    0.80   0.80     0.93   0.85
L+SQ           0.94    0.82   0.81     0.93   0.85
L+S′Q          0.93    0.80   0.80     0.91   0.90
L+S′ParQ       0.94    0.80   0.80     0.91   0.86
L+Q+S′Q        0.94    0.81   0.81     0.91   0.88
L+SQ+S′Q       0.95    0.84   0.83     0.94   0.88
Table 4: Dialog state update classification accuracies, where L stands for lexical features, Q for query overlap, P for prompt overlap, SQ for previous state query overlap, S′Q for new state query overlap, and S′ParQ for new state parent query overlap.

6 The Complete Dialog System
Following the overall setup described in Section 2, we integrate the learned models into a complete dialog system.
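Putting the pieces together, the interaction loop can be sketched as below. The function and field names (run_dialog, classify_initial, update_state, the dict-shaped state and node) are our own illustrative stand-ins for the models of Sections 4 and 5, and the terminal check is a simplification rather than the authors' control flow.

```python
def run_dialog(trees, classify_initial, update_state, read=input, write=print):
    """Skeleton of the end-to-end system: keep an index into a dialog-tree node,
    emit that node's system action, and reinterpret every user utterance."""
    state = None
    while True:
        utterance = read("user> ")
        if state is None:
            state = classify_initial(utterance, trees)      # Section 4 model
        else:
            state = update_state(utterance, state, trees)   # Section 5 model
        node = state["node"]
        write("system> " + node["system_action"])
        if not node.get("children"):        # leaf action node: treat task as done
            write("system> Task completed")
            return state

# Tiny stub demo: a one-node "tree" and trivial classifiers (interactive, so commented out).
leaf = {"system_action": "Choose a page number design.", "children": []}
# run_dialog([leaf], lambda u, t: {"node": leaf}, lambda u, s, t: {"node": leaf})
```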
To evaluate the quality of the interactions with this system, we performed a small-scale user study, as described here.

Experimental Setup
We randomly chose 8 goals, across topics, from the 76 used to train our models. For each goal, six users were shown a Microsoft Word document before and after the goal was executed. Users were allowed to interact with the system in an unconstrained manner, providing a total of 48 interactions. If at any point they thought the system response was irrelevant to their intent, they could either rephrase or start over.

User Study
We objectively measured task completion and the quality of individual system utterances. We also asked the users to rate the system (on a scale of 1-5, with 5 being best) on various measures, including whether the system asked relevant questions. Table 5 summarizes the results. In general, users were able to complete the tasks and were reasonably satisfied.

Discussion
Figure 6 shows four example dialogs. User utterances ranged from single keywords to rich, sentence-like goal descriptions. For the shorter inputs, the system was generally able to identify a small cohort of high-probability intents. For longer inputs, especially those with explicitly specified parameter settings, the system was usually able to elicit whatever additional information was required to complete the task. The majority of errors fell into two categories, in which the system: (I) missed key information in the input and asked redundant questions (Figure 6 (C)); or (II) failed to correctly identify the intent and asked an irrelevant question (Figure 6 (D)). Table 5 shows the frequency of each error type.

Objective Measures                    Observed Rates
User achieved task                    83%
System asked irrelevant questions     19%
System asked redundant questions      18%

User Satisfaction                     Score   Std. Dev.
Overall satisfaction                  3.42    0.49
Questions were relevant               4.17    0.41
Ambiguity was managed                 4.00    0.89
Table 5: User study results.

(A)
U: "I want a page number and title at the bottom of the page"
S: "Choose a page number design from the gallery of designs"
U: *User selects design*
S: "To insert header or footer content before the page number, type the content, and then press TAB to position the content"
U: "Professional Resume"
S: "Task Completed"

(B)
U: "Alphabetize bullets"
S: "Do you want to sort a list alphabetically"
U: "Sort a list"
S: "Select the text in a bulleted or numbered list"
U: *User selection*
S: "Ascending or Descending?"
U: "Ascending"
S: "Task Completed"

(C)
U: "I want the background color to be yellow"
S: "Choose a color for the document"
U: "Yellow"
S: "Task completed"

(D)
U: "I want to change the font face and size"
S: "Choose the default font"
U: "No, reduce the size of the text"
S: *System Restarts*

Figure 6: Four example dialogs from the user study, including cases that (A and B) complete successfully, (C) have a redundant question, and (D) fail to recognize the user intent.

7 Related Work
To the best of our knowledge, this paper presents the first effort to induce full procedural dialog systems from instructional text and query click logs.

Grounded Language Learning
There has been significant interest in grounded language learning. Perhaps the most closely related work learns to understand instructions and automatically complete the tasks they describe (Branavan et al., 2009; Vogel and Jurafsky, 2010; Kushman et al., 2009; Branavan et al., 2010; Artzi and Zettlemoyer, 2013). However, these approaches did not model user interaction. There are also many related approaches for other grounded language problems, including understanding game strategy guides (Branavan et al., 2011), modeling user goals in a Windows domain (Horvitz et al., 1998), learning from conversational interaction (Artzi and Zettlemoyer, 2011), learning to sportscast (Chen and Mooney, 2011), learning from event streams (Liang et al., 2009), and learning paraphrases from crowdsourced captions of video snippets (Chen and Dolan, 2011).
Dialog Generation from Text Similarly to Piwek’s work (2007; 2010; 2011), we study extracting dialog knowledge from documents (monologues or instructions). However, Piwek’s approach generates static dialogs, for example to generate animations of virtual characters having a conversation. There is no model of dialog management or user interaction, and the approach does not use any machine learning. In contrast, to the best of our knowledge, we are the first to demonstrate it is possible to learn complete, interactive dialog systems using instructional texts (and nonexpert annotation). Learning from Web Query Logs Web query logs have been extensively studied. For example, they are widely used to represent user intents in spoken language dialogs (T¨ur et al., 2011; Celikyilmaz et al., 2011; Celikyilmaz and Hakkani-Tur, 2012). Web query logs are also used in many other NLP tasks, including entity linking (Pantel et al., 2012) and training product and job intent classifiers (Li et al., 2008). Dialog Modeling and User Simulation Many existing dialog systems learn dialog strategies from user interactions (Young, 2010; Rieser and Lemon, 2008). Moreover, dialog data is often limited and, therefore, user simulation is commonly used (Scheffler and Young, 2002; Schatzmann et al., 2006; Georgila et al., 2005). Our overall approach is also related to many other dialog management approaches, including those that construct dialog graphs from dialog data via clustering (Lee et al., 2009), learn information state updates using discriminative classification models (Hakkani-Tur et al., 2012; Mairesse et al., 2009), optimize dialog strategy using reinforcement learning (RL) (Scheffler and Young, 2002; Rieser and Lemon, 2008), or combine RL with information state update rules (Heeman, 2007). However, our approach is unique in the use of inducing task and domain knowledge with light supervision to assist the user with many goals. 8 Conclusions and Future Work This paper presented a novel approach for automatically constructing procedural dialog systems with light supervision, given only textual resources such as instructional text and search query click logs. Evaluations demonstrated highly accurate performance, on automatic benchmarks and through a user study. Although we showed it is possible to build complete systems, more work will be required to scale the approach to new domains, scale the complexity of the dialog manager, and explore the range of possible textual knowledge sources that could be incorporated. We are particularly interested in scenarios that would enable end users to author new goals by writing procedural instructions in natural language. Acknowledgments The authors would like to thank Jason Williams and the anonymous reviewers for their helpful comments and suggestions. References Priti Aggarwal, Ron Artstein, Jillian Gerten, Anthanasios Katsamanis, Shrikanth Narayanan, Angela Nazarian, and David R. Traum. 2012. The twins corpus of museum visitor questions. In Proceedings of LREC. Yoav Artzi and Luke Zettlemoyer. 2011. Learning to recover meaning from unannotated conversational interactions. In NIPS Workshop In Learning Semantics. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1(1):49–62. Michael S. Bernstein, Greg Little, Robert C. Miller, Bj¨orn Hartmann, Mark S. Ackerman, David R. Karger, David Crowell, and Katrina Panovich. 2010. 
Soylent: a word processor with a crowd inside. In Proceedings of ACM Symposium on User Interface Software and Technology. 1677 S. R. K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of ACL. S. R. K. Branavan, Luke S. Zettlemoyer, and Regina Barzilay. 2010. Reading between the lines: learning to map high-level instructions to commands. In Proceedings of ACL. S. R. K. Branavan, David Silver, and Regina Barzilay. 2011. Learning to win by reading manuals in a monte-carlo framework. In Proceedings of ACL. Asli Celikyilmaz and Dilek Hakkani-Tur. 2012. A joint model for discovery of aspects in utterances. In Proceedings of ACL. Asli Celikyilmaz, Dilek Hakkani-T¨ur, and Gokhan T¨ur. 2011. Mining search query logs for spoken language understanding. In Proceedings of ICML. David L. Chen and William B. Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In Proceedings of ACL. David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of AAAI. Myroslava Dzikovska, Amy Isard, Peter Bell, Johanna D. Moore, Natalie B. Steinhauser, Gwendolyn E. Campbell, Leanne S. Taylor, Simon Caine, and Charlie Scott. 2011. Adaptive intelligent tutorial dialogue in the beetle ii system. In Proceedings of AIED. Kallirroi Georgila, James Henderson, and Oliver Lemon. 2005. Learning user simulations for information state update dialogue systems. In Proceedings of Eurospeech. Dilek Hakkani-Tur, Gokhan Tur, Larry Heck, Ashley Fidler, and Asli Celikyilmaz. 2012. A discriminative classification-based approach to information state updates for a multi-domain dialog system. In Proceedings of Interspeech. Peter Heeman. 2007. Combining Reinforcement Learning with Information-State Update Rules. In Proceedings of ACL. Eric Horvitz, Jack Breese, David Heckerman, David Hovel, and Koos Rommelse. 1998. The Lumiere project: Bayesian user modeling for inferring the goals and needs of software users. In Proceedings of Uncertainty in Artificial Intelligence. Nate Kushman, Micah Brodsky, S. R. K. Branavan, Dina Katabi, Regina Barzilay, and Martin Rinard. 2009. WikiDo. In ACM HotNets. Cheongjae Lee, Sangkeun Jung, Kyungduk Kim, and Gary Geunbae Lee. 2009. Automatic agenda graph construction from human-human dialogs using clustering method. In Proceedings of NAACL. Xiao Li, Ye-Yi Wang, and Alex Acero. 2008. Learning query intent from regularized click graphs. In Proceedings of SIGIR. Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of ACL-IJCNLP. F. Mairesse, M. Gasic, F. Jurcicek, S. Keizer, B. Thomson, K. Yu, and S. Young. 2009. Spoken language understanding from unaligned data using discriminative classification models. In Proceedings of Acoustics, Speech and Signal Processing. Fabrizio Morbini, Eric Forbell, David DeVault, Kenji Sagae, David R. Traum, and Albert A. Rizzo. 2012. A mixed-initiative conversational dialogue system for healthcare. In Proceedings of SIGDIAL. Jorge Nocedal and Stephen J. Wright. 2000. Numerical Optimization. Springer. Patric Pantel, Thomas Lin, and Michael Gamon. 2012. Mining entity types from query logs via user intent. In Proceedings of ACL. Paul Piwek and Svetlana Stoyanchev. 2010. Generating expository dialogue from monologue: Motivation, corpus and preliminary rules. In Proceedings of NAACL. Paul Piwek and Svetlana Stoyanchev. 2011. 
Dataoriented monologue-to-dialogue generation. In Proceedings of ACL, pages 242–247. Paul Piwek, Hugo Hernault, Helmut Prendinger, and Mitsuru Ishizuka. 2007. T2d: Generating dialogues between virtual agents automatically from text. In Proceedings of Intelligent Virtual Agents. Verena Rieser and Oliver Lemon. 2008. Learning effective multimodal dialogue strategies from wizardof-oz data: Bootstrapping and evaluation. In Proceedings of ACL. A. Rizzo, Kenji Sagae, E. Forbell, J. Kim, B. Lange, J. Buckwalter, J. Williams, T. Parsons, P. Kenny, David R. Traum, J. Difede, and B. Rothbaum. 2011. Simcoach: An intelligent virtual human system for providing healthcare information and support. In Proceedings of ITSEC. Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. Knowledge Engineering Review, 21(2). Konrad Scheffler and Steve Young. 2002. Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In Proceedings of Human Language Technology Research. Natalie B. Steinhauser, Gwendolyn E. Campbell, Leanne S. Taylor, Simon Caine, Charlie Scott, Myroslava Dzikovska, and Johanna D. Moore. 2011. 1678 Talk like an electrician: Student dialogue mimicking behavior in an intelligent tutoring system. In Proceedings of AIED. David R. Traum, Priti Aggarwal, Ron Artstein, Susan Foutz, Jillian Gerten, Athanasios Katsamanis, Anton Leuski, Dan Noren, and William R. Swartout. 2012. Ada and grace: Direct interaction with museum visitors. In Proceedings of Intelligent Virtual Agents. G¨okhan T¨ur, Dilek Z. Hakkani-T¨ur, Dustin Hillard, and Asli C¸ elikyilmaz. 2011. Towards unsupervised spoken language understanding: Exploiting query click logs for slot filling. In Proceedings of Interspeech. Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of ACL. Steve Young. 2010. Cognitive user interfaces. In IEEE Signal Processing Magazine. 1679
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1680–1690, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Public Dialogue: Analysis of Tolerance in Online Discussions Arjun Mukherjee† Vivek Venkataraman† Bing Liu† Sharon Meraz‡ †Department of Computer Science ‡Department of Communication University of Illinois at Chicago [email protected] {vvenka6, liub, smeraz}@uic.edu Abstract Social media platforms have enabled people to freely express their views and discuss issues of interest with others. While it is important to discover the topics in discussions, it is equally useful to mine the nature of such discussions or debates and the behavior of the participants. There are many questions that can be asked. One key question is whether the participants give reasoned arguments with justifiable claims via constructive debates or exhibit dogmatism and egotistic clashes of ideologies. The central idea of this question is tolerance, which is a key concept in the field of communications. In this work, we perform a computational study of tolerance in the context of online discussions. We aim to identify tolerant vs. intolerant participants and investigate how disagreement affects tolerance in discussions in a quantitative framework. To the best of our knowledge, this is the first such study. Our experiments using real-life discussions demonstrate the effectiveness of the proposed technique and also provide some key insights into the psycholinguistic phenomenon of tolerance in online discussions. 1 Introduction Social media platforms have enabled people from anywhere in the world to express their views and discuss any issue of interest in online discussions/debates. Existing works in this context include recognition of support and oppose camps (Agrawal et al., 2003), mining of authorities and subgroups (Mayfield and Rosè, 2011; Abu-Jbara et al. (2012), dialogue act segmentation and classification (Morbini and Sagae, 2011; Boyer et al., 2011), etc. This paper probes further to study a different and important angle, i.e., the psycholinguistic phenomenon of tolerance in online discussions. Tolerance is an important concept in the field of communications. It is a subfacet of deliberation which refers to critical thinking and exchange of rational arguments on an issue among participants that seek to achieve consensus/solution (Habermas, 1984). Perhaps the most widely accepted definition of tolerance is that of Gastil (2005; 2007), who defines tolerance as a means to engage (in written or spoken communication) in critical thinking, judicious argument, sound reasoning, and justifiable claims through constructive discussion as opposed to mere coercion/egotistic clashes of ideologies. In this work, we adopt this definition, and also employ the following characteristics of tolerance (also known as “code of conduct”) (Crocker, 2005; Gutmann and Thompson, 1996) to guide our work. Reciprocity: Each member (or participant) offers proposals and justifications in terms that others could understand and accept. Publicity: Each member engages in a process that is transparent to all and each member knows with whom he is agreeing or disagreeing. Accountability: Each member gives acceptable and sound reasons to others on the various claims or proposals suggested by him. Mutual respect and civic integrity: Each member’s speech should be morally acceptable, i.e., using proper language irrespective of agreement or disagreement of views. 
The issue of tolerance has been actively researched in the field of communications for the past two decades, and has been investigated in multiple dimensions. However, existing studies are typically qualitative and focus on theorizing the socio-linguistic aspects of tolerance (more details in §2). With the rapid growth of social media, the large volumes of online discussions/debates offer a golden opportunity to investigate people’s implicit psyche in discussions quantitatively based on the real-life data, i.e., their tolerance levels and their arguing nature, which are of fundamental interest to several fields, e.g., communications, marketing, politics, and sociology (Dahlgren, 2005; Gastil, 2005; Moxey and 1680 Sanford, 2000). Communication and political scholars are hopeful that technologies capable of identifying tolerance levels of people on social issues (often discussed in online discussions) can render vital statistics which can be used in predicting political outcomes in elections and helpful in tailoring voting campaigns and agendas to maximize winning chances (Dahlgren, 2002). Objective: The objective of this work is twofold: 1. Identifying tolerant and intolerant participants in discussions. 2. Analyzing how disagreement affects tolerance and estimating the tipping point of such effects. To the best of our knowledge, these tasks have not been attempted quantitatively before. The first task is a classification/prediction problem. Due to the complex and interactive nature of discussions, the traditional n-gram features are no longer sufficient for accurate classification. We thus propose a generative model, called DTM, to discover some key pieces of information which characterize the nature of discussions and their participants, e.g., the arguing nature (agreeing vs. disagreeing), topic and expression distributions. These allow us to generate a set of novel features from the estimated latent variables of DTM capable of capturing authors’ tolerance psyche during discussions. The features are then used in learning to identify tolerant and intolerant authors. Our experimental results show that the proposed approach is effective and outperforms several strong baselines significantly. The second task studies the interplay of tolerance and disagreement. It is well-known that tolerance facilitates constructive disagreements, but sustained disagreements often result in a transition to destructive disagreement leading to polarization and intolerance (Dahlgren, 2005). An interesting question is: What is the tipping point of disagreement to exhibit intolerance? We take a Bayesian approach to seek an answer and discover issue-specific tipping points. Our empirical results discover some interesting relationships which are supported by theoretical studies in psychology and linguistic communications. Finally, this work also produces an annotated corpus of tolerant and intolerant users in online discussions across two domains: politics and religion. We believe this is the first such dataset and will be a valuable resource to the community. 2 Related Work Although limited work has been done on analysis of tolerance in online discussions, there are several general research areas that are related to our work. Communications: Tolerance has been an active research area in the field of communications for the past two decades. Ryfe (2005) provided a comprehensive survey of the literature. 
The topic has been studied in multiple dimensions, e.g., opinion and attitude (Luskin et al., 2004; Price et al., 2002), public engagement (Escobar, 2012), psychoanalysis (Slavin and Kriegman, 1992), argument repertoire (Cappella et al., 2002), etc. Tolerance has also been investigated in the domain of political communications with an emphasis on political sophistication (Gastil and Dillard, 1999), civic culture (Dahlgren, 2002), and democracy (Fishkin, 1991). These existing works study tolerance from the qualitative perspective. Our focus is quantitative analysis. Sentiment analysis: Sentiment analysis determines positive or negative opinions expressed on topics (Liu, 2012; Pang and Lee, 2008). Main tasks include aspect extraction (Hu and Liu, 2004; Popescu and Etzioni, 2005; Mukherjee and Liu, 2012c; Chen et al., 2013), opinion polarity identification (Hassan and Radev, 2010; Choi and Cardie, 2010) and subjectivity analysis (Wiebe, 2000). Although related, tolerance is different from sentiment. Sentiments are mainly indicated by sentiment terms (e.g., great, good, bad, and poor). Tolerance in discussions refers to the reception of certain views and often indicated by agreement and disagreement expressions and other features (§5). Online discussions or debates: Several works put authors in debate into support and oppose camps. Agrawal et al. (2003) used a graph based method, and Murakami and Raymond (2010) used a rule-based method. In (Mukherjee and Liu, 2012a), contention points were identified, in (Mukherjee and Liu, 2012b), various expressions in review comment discussions were mined, and in (Galley et al., 2004; Hillard et al., 2003), speaker utterances were classified into agreement, disagreement, and backchannel classes. Also related are studies on linguistic style accommodation (Mukherjee and Liu, 2012d) and user pair interactions (Mukherjee and Liu, 2013) in online debates. However, these works do not consider tolerance analysis in debate discussions, which is the focus of this work. 1681 In a similar vein, several classification methods have been proposed to recognize opinion stances and speaker sides in online debates (Somasundaran and Wiebe, 2009; Thomas et al., 2006; Bansal et al., 2008; Burfoot et al., 2011; Yessenalina et al., 2010). Lin and Hauptmann (2006) also proposed a method to identify opposing perspectives. Abu-Jbara et al. (2012) identified subgroups. Kim and Hovy (2007) studied election prediction by analyzing online discussions. Other related works studying dialogue and discourse in discussions include authority recognition (Mayfield and Rosè, 2011), dialogue act segmentation and classification (Morbini and Sagae, 2011; Boyer et al., 2011), discourse structure prediction (Wang et al., 2011). All these prior works are valuable. But they are not designed to identify tolerance or to analyze tipping points of disagreements for intolerance in discussions which are the focus of this work. 3 Discussion/Debate Data For this research, we used discussion posts from Volconvo.com. This forum is divided into various domains: Politics, Religion, Science, etc. Each domain consists of multiple discussion threads. Each thread consists of a list of posts. Our experimental data is from two domains, Politics and Religion. The data is summarized in Table 1(a). In this work, the terms users, authors and participants are used interchangeably. 
The full data is used for modeling, but 436 and 501 authors from Politics and Religion domains were manually labeled as being tolerant or intolerant (Table 1(c)) respectively for classification experiments. Two judges (graduate students) were used to label the data. The judges are fluent in English and were briefed on the definition of tolerance (see §1). From each domain (Politics, Religion), we randomly sampled authors having not more than 60 posts in order to reduce the labeling burden as the judges need to read all posts and see all interactions of each author before providing a label. Given all posts by an author, 𝑎 and his/her associated interactions (posts by other authors replying or quoting 𝑎), the judges were asked to provide a label for author 𝑎 as being tolerant or intolerant. In our labeling, we found that users strongly exhibit one dominant trait: tolerant or intolerant, as our data consists of topics like elections, immigration, theism, terrorism, and vegetarianism across politics and religion domains, which are often heated and thus attract people with pre-determined, strong, and polarized stances1. The judges worked in isolation (to prevent bias) during annotation/labeling and were also asked to provide a short reason for their judgment. The agreement statistics using Cohen’s kappa are given in Table 1(b), which shows substantial agreements according to the scale 2 in (Landis and Koch, 1977). This shows that tolerance as defined in §1 is quite decisive and one can decide whether a debater is exhibiting tolerant vs. intolerant quite well. To account for disagreements in labels, the judges discussed their reasons to reach a consensus. The final labeled data is reported in Table 1(c). 4 Model We now present our generative model to capture the key aspects of discussions/debates and their intricate relationships, which enable us to (1) design sophisticated features for classification and (2) perform an in-depth analysis of the interplay of disagreement and tolerance. The model is called Debate Topic Model (DTM). DTM is a semi-supervised generative model motivated by the joint occurrence of various topics; and agreement and disagreement expressions (abbreviated AD-expressions hereon) in debate posts. A typical debate post mentions a few topics (using similar topical terms) and expresses some viewpoints with one or more ADexpression types (Agreement and Disagreement) using semantically related expressions. This observation forms the basis of the generative process of our model where documents (posts) are represented as admixtures of latent topics and AD-expression types (Agreement and Disagreement). This key observation and the motivation of modeling debates are from our previous work in (Mukherjee and Liu, 2012a). In the new set 1 These hardened perspectives are theoretically supported by the polarization effect (Sunstein, 2002), and the hostile media effect, a scenario where partisans rigidly hold on to their stances (Hansen and Hyunjung, 2011). 2 Agreement levels are as follows. 𝜅∈[0, 0.2]: Poor, 𝜅∈(0.2, 0.4]:Fair, 𝜅∈(0.4, 0.6]: Moderate, 𝜅∈(0.6, 0.8]: Substantial, and 𝜅∈(0.8, 1.0]: Almost perfect agreement. Domain Posts Authors Cohen’s 𝜅 Tol. Intol. Total Politics 48605 1027 0.74 213 223 436 Religion 66835 1370 0.77 207 294 501 (a) Full Data (b) Agreement (c) Labeled data Table 1: Data statistics (Tol: Tolerant users; Intol: Intolerant users. Total = Tol. + Intol). 
1682 ting, we model topics and debate expression distributions specific to authors as this work is concerned with modeling authors’ (in)tolerance nature. Making latent variable 𝜃𝐸and 𝜃𝑇author specific facilitates modeling user behaviors (§5.3). Assume we have 𝑡1…𝑇 topics and 𝑒1…𝐸 expression types in our corpus. In our case of debate posts, based upon reading various posts, we hypothesize that 𝐸 = 2 as in debates as we mostly find 2 dominant expression types: Agreement and Disagreement. Meanings of variables used in the following discussion are detailed in Table 2. In this work, a document/post is viewed as a bag of n-grams and we use terms to denote both words (unigrams) and phrases (n-grams)3. DTM is a switching graphical model performing a switch between topics and AD-expressions similar to that in (Zhao et al., 2010). The switch is done using a learned maximum entropy (MaxEnt) model. The rationale here is that topical and AD-expression terms usually play different syntactic roles in a sentence. Topical terms (e.g., “U.S. elections,” “government,” “income tax”) tend to be noun and noun phrases while expression terms (“I refute,” “how can you say,” “I’d agree”) usually contain pronouns, verbs, whdeterminers, and modals. In order to utilize the part-of-speech (POS) tag information, we place the topic/AD-expression distribution, 𝜓𝑎,𝑑,𝑗 (the prior over the indicator variable 𝑟𝑎,𝑑,𝑗) in the term plate (Figure 1) and set it using a Max-Ent model conditioned on the observed context 𝑥𝑎,𝑑,𝑗 associated with 𝑤𝑎,𝑑,𝑗 and the learned Max-Ent parameters 𝜆 (details in §4.1). In this work, we use both lexical and POS features of the previous, current and next POS tags/lexemes of the term 𝑤𝑎,𝑑,𝑗 as the contextual information, i.e., 𝑥𝑎,𝑑,𝑗= [𝑃𝑂𝑆𝑤𝑎,𝑑,𝑗−1, 𝑃𝑂𝑆𝑤𝑎,𝑑,𝑗, 𝑃𝑂𝑆𝑤𝑎,𝑑,𝑗+1, 𝑤𝑎,𝑑,𝑗−1, 𝑤𝑎,𝑑,𝑗, 𝑤𝑎,𝑑,𝑗+1], which is used to produce feature functions for Max-Ent. For phrasal terms (n-grams), all POS tags and lexemes of 𝑤𝑑,𝑗 are considered as contextual information for computing feature functions in Max-Ent. DTM has the following generative process: A. For each AD-expression type 𝑒, draw 𝜑𝑒 𝐸~𝐷𝑖𝑟(𝛽𝐸) B. For each topic t, draw 𝜑𝑡 𝑇~𝐷𝑖𝑟(𝛽𝑇) C. For each author 𝑎∈{1 … 𝐴}: i. Draw 𝜃𝑎 𝐸~𝐷𝑖𝑟(𝛼𝐸) ii. Draw 𝜃𝑎 𝑇~𝐷𝑖𝑟(𝛼𝑇) iii. For each document/post 𝑑∈{1 … 𝐷𝑎}: I. For each term 𝑤𝑎,𝑑,𝑗, 𝑗∈{1 … 𝑁𝑎,𝑑}: a. Set 𝜓𝑎,𝑑,𝑗←𝑀𝑎𝑥𝐸𝑛𝑡(𝑥𝑎,𝑑,𝑗; 𝜆) b. Draw 𝑟𝑎,𝑑,𝑗~𝐵𝑒𝑟𝑛𝑜𝑢𝑙𝑙𝑖(𝜓𝑎,𝑑,𝑗) c. if (𝑟𝑎,𝑑,𝑗= 𝑒̂) // 𝑤𝑑,𝑗is an AD-expression term Draw 𝑧𝑎,𝑑,𝑗~ 𝑀𝑢𝑙𝑡(𝜃𝑎 𝐸) else // 𝑟𝑎,𝑑,𝑗= 𝑡̂, 𝑤𝑎,𝑑,𝑗is a topical term Draw 𝑧𝑎,𝑑,𝑗~ 𝑀𝑢𝑙𝑡(𝜃𝑎 𝑇) d. Emit 𝑤𝑎,𝑑,𝑗~𝑀𝑢𝑙𝑡(𝜑𝑧𝑎,𝑑,𝑗 𝑟𝑎,𝑑,𝑗) 4.1 Inference We employ posterior inference using Monte Car 3 Topics in most topic models (e.g., LDA (Blei et al., 2003)) are unigram distributions and a document is treated as an exchangeable bag-of-words. This offers a computational advantage over models considering word orders (Wallach, 2006). As our goal is to enhance the expressiveness of DTM (rather than “modeling” word order), we use 1-4 grams preserving the advantages of exchangeable modeling. 
Figure 1: Plate notation of DTM Variable/Function Description 𝑎; 𝐴; 𝑑 An author 𝑎; set of all authors; document, 𝑑 (𝑎, 𝑑); 𝐷𝑎 Post 𝑑 by author 𝑎; Set of all posts by 𝑎 𝑇; 𝐸; 𝑉 # of topics; expression types; vocabulary 𝑤𝑎,𝑑,𝑗; 𝑁𝑎,𝑑 𝑗𝑡ℎ term in (𝑎, 𝑑); Total # of terms in (𝑎, 𝑑) 𝜓𝑎,𝑑,𝑗 Distribution over topics and ADexpressions 𝑥𝑎,𝑑,𝑗 Associated feature context of observed 𝑤𝑎,𝑑,𝑗 𝜆 Learned Max-Ent parameters 𝑟𝑎,𝑑,𝑗∈{𝑡̂, 𝑒̂} Binary indicator/switch variable ( topic (𝑡̂) or AD-expression (𝑒̂) ) for 𝑤𝑎,𝑑,𝑗 𝜃𝑎𝑇; 𝜃𝑎𝐸(𝜃𝑎,𝐴𝑔 𝐸 , 𝜃𝑎,𝐷𝑖𝑠𝐴𝑔 𝐸 ) 𝑎’s distribution over topics ; expression types (Agreement: 𝜃𝑎,𝐴𝑔 𝐸 , Disagreement: 𝜃𝑎,𝐷𝑖𝑠𝐴𝑔 𝐸 ) 𝜃𝑎,𝑑 𝑇; 𝜃𝑎,𝑑,𝑡 𝑇 Topic distribution of post 𝑑 by author 𝑎; Probability mass of topic 𝑡 in 𝜃𝑎,𝑑 𝑇. 𝜃𝑎,𝑑,𝑒∈{𝐴𝑔,𝐷𝑖𝑠𝐴𝑔} 𝐸 𝜃𝑎,𝑑 𝐸; Expression type distribution of post 𝑑 by author 𝑎; Corresponding probability masses of Agreement: 𝜃𝑎,𝑑,𝑒=𝐴𝑔 𝐸 and Disagreement in 𝜃𝑎,𝑑,𝑒=𝐷𝑖𝑠𝐴𝑔 𝐸 . 𝑧𝑎,𝑑,𝑗 Topic/Expression type of 𝑤𝑎,𝑑,𝑗 𝜑𝑡 𝑇; 𝜑𝑒𝐸 Topic 𝑡’s ; Expression type 𝑒’s distribution over vocabulary terms 𝛼𝑇; 𝛼𝐸; 𝛽𝑇; 𝛽𝐸 Dirichlet priors of 𝜃𝑎𝑇; 𝜃𝑎𝐸; 𝜑𝑡 𝑇; 𝜑𝑒𝐸 𝑛𝑎,𝑡 𝐴𝑇; 𝑛𝑎,𝑒 𝐴𝐸 # of times topic 𝑡; expression type 𝑒 assigned to 𝑎 𝑛𝑡,𝑣 𝑇𝑉; 𝑛𝑒,𝑣 𝐸𝑉 # of times term 𝑣 appears in topic 𝑡; expression type 𝑒 Table 2: List of notations x ψ z r w Na, d λ Da αE θE A αT θT φT T φE E βE βT 1683 lo Gibbs sampling. Denoting the random variables {𝑤, 𝑧, 𝑟} by singular scripts{𝑤𝑘, 𝑧𝑘, 𝑟𝑘},𝑘1…𝐾, where 𝐾= ∑∑𝑁𝑎,𝑑 𝑑 𝑎 , a single iteration consists of performing the following sampling: 𝑝(𝑧𝑘= 𝑡, 𝑟𝑘= 𝑡̂|𝑊¬𝑘, 𝑍¬𝑘, 𝑅¬𝑘, 𝑤𝑘= 𝑣) ∝ exp (∑ 𝜆𝑖𝑓𝑖(𝑥𝑎,𝑑,𝑗,𝑡̂) 𝑛 𝑖=1 ) ∑ exp (∑ 𝜆𝑖𝑓𝑖(𝑥𝑎,𝑑,𝑗,𝑦) 𝑛 𝑖=1 ) 𝑦∈{𝑡෠,𝑒ෝ} × 𝑛𝑎,𝑡 𝐴𝑇 ¬𝑘+𝛼𝑇 𝑛𝑎,(·) 𝐴𝑇 ¬𝑘+𝑇𝛼𝑇× 𝑛𝑡,𝑣 𝑇𝑉 ¬𝑘+𝛽𝑇 𝑛𝑡,(·) 𝑇𝑉 ¬𝑘+𝑉𝛽𝑇 (1) 𝑝(𝑧𝑘= 𝑒, 𝑟𝑘= 𝑒̂|𝑊¬𝑘, 𝑍¬𝑘, 𝑅¬𝑘, 𝑤𝑘= 𝑣) ∝ exp (∑ 𝜆𝑖𝑓𝑖(𝑥𝑎,𝑑,𝑗,𝑒̂) 𝑛 𝑖=1 ) ∑ exp (∑ 𝜆𝑖𝑓𝑖(𝑥𝑎,𝑑,𝑗,𝑦) 𝑛 𝑖=1 ) 𝑦∈{𝑡෠,𝑒ෝ} × 𝑛𝑎,𝑒 𝐴𝐸 ¬𝑘+𝛼𝐸 𝑛𝑎,(·) 𝐴𝐸 ¬𝑘+𝐸𝛼𝐸× 𝑛𝑒,𝑣 𝐸𝑉 ¬𝑘+𝛽𝐸 𝑛𝑒,(·) 𝐸𝑉 ¬𝑘+𝑉𝛽𝐸 (2) where 𝑘= (𝑎, 𝑑, 𝑗) denotes the 𝑗𝑡ℎ term of document 𝑑 by author 𝑎 and the subscript ¬𝑘 denotes assignments excluding the term at (𝑎, 𝑑, 𝑗). Omission of the latter index denoted by (·) represents the marginalized sum over the latter index. Count variables are detailed in Table 1 (last two rows). 𝜆1…𝑛 are the parameters of the learned Max-Ent model corresponding to the 𝑛 binary feature functions 𝑓1…𝑛 for Max-Ent. The learned Max-Ent 𝜆 parameters in conjunction with the observed context, 𝑥𝑎,𝑑,𝑗 feed the supervision signal for updating the topic/expression switch parameter, 𝑟 in equations (1) and (2). The hyper-parameters for the model were set to the values 𝛽𝑇= 𝛽𝐸= 0.1 and 𝛼𝑇 = 50/𝑇, 𝛼𝐸 = 50/ 𝐸, suggested in (Griffiths and Steyvers, 2004). Model parameters were estimated after 5000 Gibbs iterations with a burn-in of 1000 iterations. The Max-Ent parameters 𝜆 were learned using 500 labeled terms in each domain (politics:- topical: 376 and AD-expression: 124; religion:- topical: 349 and AD-expression: 151) appearing at least 10 times in debate threads other than the data in Table 1 (we do so since the data in Table 1(c) is later used in the classification experiments in §6.1). Table 3 lists some top AD-expressions discovered by DTM. We see that DTM can cluster many correct AD-expressions, e.g., “I disagree”, “I refute”, “don’t accept”, etc. in disagreement; and “I agree”, “you’re correct”, “agree with you”, etc. in agreement. 
Further, it also discovers highly specific and more distinctive expressions beyond those used in Max-Ent training (marked blue in italics), e.g., “I don’t buy your”, “can you prove,” “you fail to”, and “you have no clue” in disagreement; and phrases like “valid point”, “rightly said”, “I do support”, and “very well put” in agreement. In §6.1, we will see that these AD-expressions serve as high quality features for predicting tolerance. Lastly, we note that DTM also estimates several pieces of useful information (e.g., ADexpressions, posterior estimates of author’s arguing nature, 𝜃𝑎 𝐸; latent topics and expressions, 𝜑𝑡 𝑇; 𝜑𝑒 𝐸, etc.). These will be used to produce a rich set of user behavioral features for characterizing tolerance in §5.3. 5 Feature Engineering We now propose features which will be used for model building to classify tolerant and intolerant authors in Table 1(c). We use three sets of features. 5.1 Language based Features of Tolerance Word and POS n-grams: As tolerance in communication is directly reflected in language usage, word n-grams are obvious features. We also use POS tags (obtained using Stanford Tagger4) as features. The rationale of using POS tag based features is that intolerant communications are often characterized by hate/egotistic speech which have pronounced use of specific part of speech (e.g., pronouns) (Zingo, 1998). Heuristic Factor Analysis: In psycholinguistics, factor analysis refers to the process of finding groups of semantically similar linguistic constructs (words/phrases). It is also called meaning extraction in (Chung and Pennebaker, 2007). As tolerance in discussions is characterized by reasoned expressions which often accompany sourcing (e.g., providing a hyperlink, making an attempt to clarify with some evidence, etc.), we compiled a list of reasoned and sourced expressions (shown in Table 4) from prior works 4 http://nlp.stanford.edu/software/tagger.shtml Disagreement expressions (𝜑𝑒=𝐷𝑖𝑠𝑎𝑔𝑟𝑒𝑒𝑚𝑒𝑛𝑡 𝐸 ) I, disagree, I don’t, I disagree, argument, reject, claim, I reject, I refute, and, your, I refuse, won’t, the claim, nonsense, I contest, dispute, I think, completely disagree, don’t accept, don’t agree, incorrect, doesn’t, hogwash, I don’t buy your, I really doubt, your nonsense, true, can you prove, argument fails, you fail to, your assertions, bullshit, sheer nonsense, doesn’t make sense, you have no clue, how can you say, do you even, contradict yourself, … Agreement expressions (𝜑𝑒=𝐴𝑔𝑟𝑒𝑒𝑚𝑒𝑛𝑡 𝐸 ) agree, I, correct, yes, true, accept, I agree, don’t, indeed correct, your, point, that, I concede, is valid, your claim, not really, would agree, might, agree completely, yes indeed, absolutely, you’re correct, valid point, argument, the argument, proves, do accept, support, agree with you, rightly said, personally, well put, I do support, personally agree, doesn’t necessarily, exactly, very well put, absolutely correct, kudos, point taken,... Table 3: Top terms (comma delimited) of two expression types. Red (bold) terms denote possible errors. Blue (italics) terms are newly discovered; rest (black) terms have been used in Max-Ent training. 1684 (Chung and Pennebaker, 2007; Flor and Hadar, 2005; Moxey and Sanford, 2000; Pennebaker, et al., 2007). 5.2 Debate Expression Features AD-expressions: As we have seen in §4, DTM can discover specific agreement and disagreement expressions in debates. We use these expressions as another feature set. 
Estimated ADexpressions (Table 3) serve as a principled way of performing factor analysis in debates instead of heuristic factor analysis as in Table 4 used in prior works. As the AD-expression types are modeled as Dirichlet distributions (𝜑𝐸~𝐷𝑖𝑟(𝛽𝐸)), due to the smoothing effect, each term in the vocabulary has some non-zero probability mass associated with the expression types. To ensure that the discovered expressions are representative ADexpressions, we only consider the terms in 𝜑𝐸 with 𝑝(𝑣|𝑒) = 𝜑𝑒,𝑣 𝐸> 0.001 as probability masses lower than 0.001 are more due to the smoothing effect of Dirichlet distribution than true correlation. 5.3 User Behavioral Features Here we propose several features of user interaction which reflect the socio-psychological state of tolerance while participating in discussions. We note that these features rely on the posterior estimates of latent variables 𝜃𝐸, 𝑧, and 𝑟 in DTM (§4) and are thus difficult to obtain without modeling. Overall Arguing Nature: The posterior on 𝜃𝑎𝐸 (Table 2) for each author, 𝑎 gives an estimate of 𝑎’s overall arguing nature (agreeing or disagreeing). We use the probability mass assigned to each arguing nature type as a user behavioral feature. This gives us two features 𝑓1, 𝑓2 as follows: 𝑓1(𝑎) = 𝜃𝑎,𝐴𝑔 𝐸 ; 𝑓2(𝑎) = 𝜃𝑎,𝐷𝑖𝑠𝐴𝑔 𝐸 (3) Behavioral Response: As intolerant users are likely to attract more disagreement, it is naturally useful to estimate the response (agreeing vs. disagreeing) a user receives from other users. For computing behavioral response, we first use the posterior on 𝑧 to compute the distribution of ADexpressions (i.e., the relative probability masses of agreeing and disagreeing expressions) in a document 𝑑 by an author 𝑎 as follows: 𝜃𝑎,𝑑,𝐴𝑔 𝐸 = ห൛𝑗ห𝑧𝑎,𝑑,𝑗=𝐴𝑔,1≤𝑗≤𝑁𝑎,𝑑ൟห ห൛𝑗ห𝑟𝑎,𝑑,𝑗=𝑒̂,1≤𝑗≤𝑁𝑎,𝑑ൟห; 𝜃𝑎,𝑑,𝐷𝑖𝑠𝐴𝑔 𝐸 = ห൛𝑗ห𝑧𝑎,𝑑,𝑗=𝐷𝑖𝑠𝐴𝑔,1≤𝑗≤𝑁𝑎,𝑑ൟห ห൛𝑗ห𝑟𝑎,𝑑,𝑗=𝑒̂,1≤𝑗≤𝑁𝑎,𝑑ൟห (4) Now to get the overall behavioral response of an author, 𝑎 we take the expected value of the agreeing and disagreeing responses that 𝑎 received from other authors 𝑎′ who replied to or quoted 𝑎’s posts. The expectations below are taken over all posts 𝑑′ by 𝑎′ which reply/quote posts of 𝑎. 𝑓3(𝑎) = 𝐸[𝜃𝑎′ 𝑑′,𝐴𝑔 𝐸 ]; 𝑓4(𝑎) = 𝐸ൣ𝜃𝑎′ 𝑑′,𝐷𝑖𝑠𝐴𝑔 𝐸 ൧ (5) Equality of Speech: In communication literature (Dahlgren, 2005; Habermas, 1984), equality is theorized as an essential element of tolerance. Each participant must be able to participate on an equal footing with others without anybody dominating the discussion. In online debates, we can measure this phenomenon using the following feature: 𝑓5(𝑎) = 𝐸ቂቀ # 𝑜𝑓 𝑝𝑜𝑠𝑡𝑠 𝑏𝑦 𝑎 𝑖𝑛 𝑡ℎ𝑟𝑒𝑎𝑑 𝑙 # 𝑜𝑓 𝑝𝑜𝑠𝑡𝑠 𝑖𝑛 𝑡ℎ𝑟𝑒𝑎𝑑 𝑙 ቁ𝐸[𝜃𝑎,𝑑,𝐷𝑖𝑠𝐴𝑔 𝐸 ]ቃ (6) where the inner expectation is taken over all posts of 𝑎 in thread 𝑙 and the outer expectation is taken over all threads 𝑙 in which 𝑎 participated. The above definition computes the aggressive posting behavior of author 𝑎 whereby he tires to dominate the thread by posting more than others. The aggressive posting behavior is weighted by author’s disagreeing nature because a person usually exhibits a dominating nature when he pushes hard to establish his ideology (which is often in disagreement with others) (Moxey and Sanford, 2000). Topic Shifts: An interesting phenomenon of human (social) psyche is that when people are unable to logically argue their stances and feel they are losing the debate, they often try to belittle/deride others by pulling unrelated topics into discussion (Slavin and Kriegman, 1992). 
This is Factor: Reasoning words/phrases because, because of, since, reason, reason being, reason is, reason why, due to, owing to, as in, therefore, thus, henceforth, hence, implies, implies that, implying, hints, hinting, hints towards, it follows that, it turns out, conclude, consequence, consequently, the cause, rationale, the rationale, justification, the justification, provided, premise, assumption, on the proviso, in spite, … Factor: Sourcing words/phrases presence of hyperlinks/urls, source, reference, for example, for instance, namely, to explain, to detail, to clarify, to elucidate, to illustrate, to be precise, furthermore, moreover, apart from, besides, we find, … Table 4: Heuristic Factor Analysis (HFA). Words/Phrases in each factor compiled from prior works in psycholinguistics. 1685 referred to as topic shifts. Topic shifts thus have a relation with tolerance in deliberation. StromerGalley (2005) reported that if the discussion is off topic, then tolerance or deliberation cannot meet its objective of deep consideration of an issue. Hence, the average topic shifts of an author, 𝑎 across various posts in a thread can serve as a good feature for measuring tolerance. We use the posterior on per-document topic distribution, 𝜃𝑎,𝑑,𝑡 𝑇 = ห൛𝑗ห𝑧𝑎,𝑑,𝑗=𝑡,1≤𝑗≤𝑁𝑎,𝑑ൟห ห൛𝑗ห𝑟𝑎,𝑑,𝑗=𝑡̂,1≤𝑗≤𝑁𝑎,𝑑ൟห to measure topic shifts using KL-Divergence as follows: 𝑓6 = 𝐸ቂavg𝑑,𝑑′∈ 𝑡ℎ𝑟𝑒𝑎𝑑 𝑙ቀ𝐷𝐾𝐿൫𝜃𝑎,𝑑 𝑇||𝜃𝑎,𝑑′ 𝑇 ൯ቁቃ (7) We first compute author, 𝑎’s average topic shifts in a thread, 𝑙 which measures his topic shifts in 𝑙. But this only gives us his behavior in one thread. To capture his overall behavior, we take the expected value of this behavior over all threads in which 𝑎 participated. We take average KLdivergence (KL-Div.) over all pairs of posts by 𝑎 in a given thread to account for the asymmetry of KL-Div. Finally, we note that by no means do we claim that the mere presence and a large value of any of the above features imply that a user is intolerant or tolerant. They are indicators of the phenomenon of tolerance in discussions/debates. The actual prediction is done using the learned models in §6.1. 6 Experimental Evaluation We now detail the experiments that investigate the strengths of features in §5. In particular, we first consider the task of classifying whether an author is tolerant or intolerant in discussions. Then, we analyze how disagreement affects tolerance. 6.1 Tolerant and Intolerant Classification Here, we show that the features in §5 can help build accurate models for predicting tolerance. We employ a linear kernel 5 SVM (using the SVMLight system (Joachims, 1999)) and report 5fold cross validation (CV) results on the task of predicting the socio-psychological nature of users’ communication: tolerant vs. intolerant in politics and religion domains (Table 1(c)). Note that for each fold of 5-fold CV, DTM was run on the full data of each domain (Table 1(a)) excluding the users (and their associated posts) in the test set of that fold for generating the features of the training instances (users). The learned DTM 5 Other kernels (rbf, poly, sigmoid) did not perform as well. was then fitted (using the approach in (Hofmann, 1999)) to the test set users and their posts for generating the features of the test instances. To investigate the effectiveness of the proposed framework, we incrementally add feature sets starting with the baseline features. Word unigrams and bigrams (inclusive of unigrams)6 serve as our first baseline (B1a, B1b). Word + POS bigrams is our second baseline (B2). 
6 Experimental Evaluation

We now detail the experiments that investigate the strengths of the features in §5. In particular, we first consider the task of classifying whether an author is tolerant or intolerant in discussions. Then, we analyze how disagreement affects tolerance.

6.1 Tolerant and Intolerant Classification

Here, we show that the features in §5 can help build accurate models for predicting tolerance. We employ a linear-kernel SVM (other kernels, i.e., rbf, poly, and sigmoid, did not perform as well), using the SVMlight system (Joachims, 1999), and report 5-fold cross-validation (CV) results on the task of predicting the socio-psychological nature of users' communication, tolerant vs. intolerant, in the politics and religion domains (Table 1(c)). Note that for each fold of 5-fold CV, DTM was run on the full data of each domain (Table 1(a)), excluding the users (and their associated posts) in the test set of that fold, for generating the features of the training instances (users). The learned DTM was then fitted (using the approach in (Hofmann, 1999)) to the test-set users and their posts for generating the features of the test instances.

To investigate the effectiveness of the proposed framework, we incrementally add feature sets, starting with the baseline features. Word unigrams and bigrams (inclusive of unigrams; higher-order n-grams did not result in better results) serve as our first baselines (B1a, B1b). Word + POS bigrams is our second baseline (B2). "Word" in B2 uses bigrams, as B1b gives better results. B2 + Heuristic Factor Analysis (HFA) (Table 4) serves as our third baseline (B3).

Table 4: Heuristic Factor Analysis (HFA). Words/phrases in each factor compiled from prior works in psycholinguistics.
  Factor: Reasoning words/phrases: because, because of, since, reason, reason being, reason is, reason why, due to, owing to, as in, therefore, thus, henceforth, hence, implies, implies that, implying, hints, hinting, hints towards, it follows that, it turns out, conclude, consequence, consequently, the cause, rationale, the rationale, justification, the justification, provided, premise, assumption, on the proviso, in spite, ...
  Factor: Sourcing words/phrases: presence of hyperlinks/urls, source, reference, for example, for instance, namely, to explain, to detail, to clarify, to elucidate, to illustrate, to be precise, furthermore, moreover, apart from, besides, we find, ...

Table 5: Precision, Recall, F1 score on the tolerant class, and Accuracy for different feature settings across the two domains. DE: debate expression features (AD-expressions, Table 3, §5.2). UB: user behavioral features (§5.3). Improvements in F1 and Accuracy using DTM features (beyond baselines B1-B3) are statistically significant (†: p<0.02; ‡: p<0.01) using a paired t-test with 5-fold CV.
  Feature setting            | Politics: P / R / F1 / Acc    | Religion: P / R / F1 / Acc
  B1a: Word unigrams         | 64.1 / 86.3 / 73.7 / 70.1     | 61.9 / 86.8 / 72.6 / 71.9
  Word unigrams + IG         | 64.5 / 86.2 / 73.9 / 70.2     | 62.7 / 86.9 / 72.9 / 71.9
  B1b: Word bigrams          | 66.8 / 87.8 / 75.9 / 72.4     | 64.9 / 89.1 / 75.9 / 75.1
  B2: Word+POS bigrams       | 68.5 / 86.8 / 76.4 / 73.7     | 66.6 / 88.4 / 76.8 / 76.7
  B3: B2 + HFA (Table 4)     | 69.2 / 90.5 / 78.1 / 75.2     | 66.4 / 90.6 / 76.8 / 77.5
  B3 + DE (§5.2)             | 74.7 / 91.3 / 82.4† / 79.5†   | 70.2 / 92.8 / 80.8† / 82.1†
  B3 + DE + UB (§5.3)        | 76.1 / 92.2 / 83.1‡ / 83.2‡   | 71.7 / 93.4 / 82.1‡ / 83.3‡

Table 5 shows the experiment results. We note the following:
1. Across both domains, adding POS bigrams slightly improves classification accuracy and F1-score beyond standard word unigrams and bigrams. Feature selection using information gain (IG) does not help much.
2. Using the heuristic factor analysis (HFA) of reasoned and sourced expressions (Table 4) brings about a 1% and 2% improvement in accuracy in the politics and religion domains respectively.
3. Debate expression features (DE, §5.2) and user behavioral features (UB, §5.3) produced from DTM progressively improve classification accuracies by 4% and 8% in the politics domain and 5% and 6% in the religion domain. The improvements are also statistically significant.

In summary, we can see that modeling made a major impact. It improved the accuracy by about 10% over the traditional unigram and bigram baselines. This shows that the debate expressions and user behaviors computed using the DTM model can capture various dimensions of (in)tolerance not captured by n-grams.
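The incremental feature-set comparison in Table 5 can be approximated with a sketch like the one below. It uses scikit-learn's LinearSVC and cross_val_score as a stand-in for the SVMlight setup reported above; the feature matrices and the per-fold re-estimation of DTM are assumed to be available and are not reproduced here.

  import numpy as np
  from sklearn.svm import LinearSVC
  from sklearn.model_selection import cross_val_score

  def incremental_evaluation(feature_blocks, y, order, cv=5):
      """Evaluate cumulatively growing feature sets, as in Table 5.

      feature_blocks: dict mapping a block name (e.g. 'word_bigrams', 'pos_bigrams',
                      'hfa', 'de', 'ub') to a dense numpy feature matrix (n_users x d).
      y:              binary labels (1 = tolerant, 0 = intolerant).
      order:          list of block names, added one at a time.
      Note: in the setup above, DTM is re-run per fold excluding the test users;
      that step is omitted in this sketch.
      """
      blocks, results = [], {}
      for name in order:
          blocks.append(feature_blocks[name])
          X = np.hstack(blocks)
          clf = LinearSVC()  # linear kernel, mirroring the linear-kernel SVM above
          f1 = cross_val_score(clf, X, y, cv=cv, scoring="f1").mean()
          acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
          results[" + ".join(order[:len(blocks)])] = (f1, acc)
      return results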
6.2 How Does Disagreement Affect Tolerance?

We now quantitatively study the effect of disagreement on tolerance. We recall from §1 that tolerance indicates constructive discussion and allows disagreement. Some level of disagreement is oftentimes an integral component of deliberation and tolerance (Cappella et al., 2002). Disagreements, however, can be either constructive or destructive. The distinction is that the former is aimed at arriving at a consensus or solution, while the latter leads to polarization and intolerance (Sunstein, 2002). It was also shown in (Dahlgren, 2005) that sustained disagreement often takes a transition towards destructive disagreement and is likely to lead to intolerance. A similar phenomenon was also identified in the psychology literature (Critchley, 1964). In such cases, the participants often stubbornly stick to an extreme attitude, which eventually results in intolerance and defeats the very purpose of deliberative discussion. An intriguing research question is: What is the relationship between disagreement and intolerance? The question is interesting from both the communication and psycholinguistic perspectives. To the best of our knowledge, this is the first attempt towards seeking an answer. We work in the context of five issues/threads in real-life online debates. To derive quantitative and definite conclusions, it is required to perform the following tasks:
• For each issue, empirically investigate in expectation the tipping point of disagreement beyond which a user tends to be intolerant.
• Further, investigate the confidence in the estimated tipping point (i.e., what is the likelihood that the estimated tipping point is statistically significant rather than due to chance alone).

We formalize the above tasks in the Bayesian setting. Recall from Table 2 of §4 that θ^E_{a,Ag} (respectively, θ^E_{a,DisAg}) is the estimate of the agreeing (respectively, disagreeing) nature of an author, and θ^E_{a,Ag} + θ^E_{a,DisAg} = 1. Let TP(τ) denote the event that, in expectation, a threshold value 0 < τ < 1 serves as a tipping point of disagreement beyond which intolerance is exhibited. Note that we emphasize the term "in expectation" (taken over all authors). We do not mean that every author whose disagreement θ^E_{a,DisAg} > τ is intolerant. The empirical likelihood of TP(τ) can be expressed by the following probability expression:

  L(TP(τ)) = E[ P(θ^E_{a,DisAg} > τ | a = I) − P(θ^E_{a,DisAg} > τ | a = T) ]   (8)

The events a = I and a = T denote that author a is intolerant and tolerant respectively. The expectation is taken over authors. Showing that τ indeed serves as the tipping point of disagreement for exhibiting intolerance corresponds to rejecting the null hypothesis that the probabilities in (8) are equal. We employ Fisher's exact test to test significance and report confidence measures (using p-values) for the tipping-point thresholds. The results are shown in Table 6.

The threshold τ is computed using the entropy method in (Fayyad and Irani, 1993) as follows. We first fit our previously learned model (using the data in Table 1(a)) to the new threads in Table 6 and their users and posts, to obtain the estimates of θ^E_{a,DisAg} and the other latent variables for feature generation. The learned classifier in §6.1 is used to predict the nature of users (tolerant vs. intolerant) in the new threads. (Although this prediction may not be perfect, it can be regarded as considerably reliable for studying the trend of tolerance across different issues, as our classifier in §6.1 attains a high (83%) classification accuracy using the full feature set. As judging all users across all threads would require reading about 7,000 posts, for confirmation we randomly sampled 30 authors across various threads for labeling by our judges; 28 out of 30 predictions produced by the classifier correlated with the judges' labels, which should be sufficiently accurate for our analysis.) Then, for each user we have his predicted deliberative (social) psyche (tolerant vs. intolerant) and also his overall disagreeing nature exhibited in that thread (the posterior on θ^E_{a,DisAg} ∈ [0, 1]). For a thread, tolerant and intolerant users (data points) span the range [0, 1], attaining different values for θ^E_{a,DisAg}. Each candidate tipping point of disagreement 0 ≤ τ' ≤ 1 results in a binary partition of the range, with each partition containing some proportion of tolerant and intolerant users. We compute the entropy of the partition for every candidate tipping point in the range [0, 1]. The final tipping-point threshold τ is chosen such that it minimizes the partition entropy, based on the binary cut-point method in (Fayyad and Irani, 1993).
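A sketch of the two statistical steps just described: choosing the tipping point τ as the disagreement value that minimizes the entropy of the induced binary partition (the binary cut-point idea of Fayyad and Irani (1993)), and then testing it with Fisher's exact test. The input lists are hypothetical per-author values; this illustrates the procedure, not the authors' implementation.

  import numpy as np
  from scipy.stats import fisher_exact

  def partition_entropy(labels):
      """Shannon entropy (in bits) of a binary label collection."""
      if len(labels) == 0:
          return 0.0
      p = np.mean(labels)
      ent = 0.0
      for q in (p, 1.0 - p):
          if q > 0:
              ent -= q * np.log2(q)
      return ent

  def tipping_point(disagreement, intolerant):
      """disagreement: per-author theta^E_{a,DisAg} values in [0,1];
      intolerant: per-author predicted labels (1 = intolerant, 0 = tolerant).
      Returns (tau, p_value): tau minimizes the weighted partition entropy;
      p_value is from a two-tailed Fisher's exact test on the 2x2 table."""
      d = np.asarray(disagreement, dtype=float)
      y = np.asarray(intolerant, dtype=int)
      n = len(d)
      best_tau, best_ent = None, float("inf")
      for tau in sorted(set(d)):            # candidate cut points
          left, right = y[d <= tau], y[d > tau]
          ent = (len(left) / n) * partition_entropy(left) + \
                (len(right) / n) * partition_entropy(right)
          if ent < best_ent:
              best_ent, best_tau = ent, tau
      # 2x2 table: rows = intolerant / tolerant, columns = disagreement > tau / <= tau
      above, below = y[d > best_tau], y[d <= best_tau]
      table = [[int(above.sum()), int(below.sum())],
               [int(len(above) - above.sum()), int(len(below) - below.sum())]]
      _, p_value = fisher_exact(table, alternative="two-sided")
      return best_tau, p_value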
Table 6: Tipping points of disagreement for intolerance (τ) for different issues. E[θ^E_{a,d,DisAg}]: the expected disagreement over all posts in each issue/thread; # Posts: the total number of posts; # Users: the total number of users/authors; % Intol.: the percentage of intolerant users in each thread; τ: the estimated tipping point; p-value: computed from a two-tailed Fisher's exact test.
  Thread/Issue       | # Posts | # Users | % Intol. | E[θ^E_{a,d,DisAg}] | τ    | p-value
  Repeal Healthcare  | 1823    | 33      | 39.9     | 0.57               | 0.65 | 0.02
  Europe's Collapse  | 1824    | 33      | 42.5     | 0.61               | 0.61 | 0.01
  Obama Euphoria     | 1244    | 26      | 30.7     | 0.66               | 0.71 | 0.01
  Socialism          | 831     | 49      | 44.8     | 0.69               | 0.48 | 0.03
  Abortion           | 1232    | 58      | 48.4     | 0.78               | 0.37 | 0.01

Since we perform a thread-level analysis, the results in Table 6 are thread/issue specific. We note the following from Table 6:
1. Across all threads/issues, we find that the expected disagreement over all posts d, E[θ^E_{a,d,DisAg}], exceeds 0.5, showing that in discussions of the reported issues disagreement predominates.
2. E[θ^E_{a,d,DisAg}] also gives an estimate of the overall heat in the issue being discussed. We find sensitive issues like abortion and socialism to be more heated than healthcare, Obama, etc.
3. The percentage of intolerant users increases with the expected overall disagreement in the issue, except for the issue Obama Euphoria.
4. The estimated tipping point of disagreement for exhibiting intolerance, τ, happens to vary inversely with the expected disagreement E[θ^E_{a,d,DisAg}], except for the issue Obama Euphoria. This reflects that as the overall disagreement in an issue increases, the tipping point of intolerance decreases; i.e., due to high discussion heat, people are likely to turn intolerant even with a relatively small amount of disagreement. This finding dovetails with prior studies in psychology (Rokeach and Fruchter, 1956) showing that heated discussions are likely to reduce thresholds of reception, leading to dogmatism, egotism, and intolerance. Table 6 shows that for moderately heated issues (healthcare, Europe's collapse), in expectation an author's disagreement θ^E_{a,DisAg} should exceed 61-65% for intolerance to be exhibited. However, for sensitive issues we find that the tipping point is much lower (abortion: 37%; socialism: 48%).
5. The issue Obama Euphoria is an exception to the other issues' trends. Even though in expectation it has E[θ^E_{a,d,DisAg}] = 66% overall disagreement, the percentage of intolerant users remains the lowest (30%) and the tipping point attains the highest value (τ = 0.71), showing more tolerance on the issue. A plausible reason could be that Obama is somewhat more liked and hence attracts less intolerance from users8.
6. The p-values of the estimated tipping points τ across all issues are statistically significant at 98-99% confidence levels.

7 Conclusion

This work performed a deep analysis of the socio-psychological and psycholinguistic phenomenon of tolerance in online discussions, which is an important concept in the field of communications. A novel framework was proposed which is capable of characterizing and classifying tolerance in online discussions. Further, a novel technique was also proposed to quantitatively evaluate the interplay of tolerance and disagreement. Our empirical results using real-life online discussions render key insights into the psycholinguistic process of tolerance and dovetail with existing theories in psychology and communications. To the best of our knowledge, this is the first such quantitative study. In future work, we want to further this research and study the role of diversity of opinions in the context of tolerance and its relation to polarization.

Acknowledgments

This work was supported in part by a grant from the National Science Foundation (NSF) under grant no. IIS-1111092.
8 This observation may be linked to the political phenomenon of “democratic citizenship through exposure to diverse perspectives” (Mutz, 2006) where it was shown that exposure to heterogeneous opinions (i.e., greater disagreement), often enhances tolerance. 1688 References Abu-Jbara, A., Dasigi, P., Diab, M. and Dragomir Radev. 2012. Subgroup detection in ideological discussions. ACL. Agrawal, R. Rajagopalan, S. Srikant, R. Xu. Y. 2003. Mining newsgroups using networks arising from social behavior. WWW. Bansal, M., Cardie, C., and Lee, L. 2008. The power of negative thinking: Exploiting label disagreement in the min-cut classification framework. In COLING. Blei, D., A. Ng, and M. Jordan. 2003. Latent Dirichlet Allocation. In JMLR. Boyer, K.; Grafsgaard, J.; Ha, E. Y.; Phillips, R.; and Lester, J. 2011. An affect-enriched dialogue act classification model for task-oriented dialogue. In ACL. Burfoot, C., S. Bird, and T. Baldwin. 2011. Collective Classification of Congressional Floor-Debate Transcripts. In ACL. Cappella, J. N., Price, V., and Nir, L. 2002. Argument repertoire as a reliable and valid measure of opinion quality: electronic dialogue during campaign 2000. Political Communication. Political Communication. Chen, Z., Mukherjee, A., Liu, B., Hsu, M., Castellanos, M., Ghosh, R. 2013. Leveraging Multi-Domain Prior Knowledge in Topic Models. In IJCAI. Chung, C. K., and Pennebaker, J. W. 2007. Revealing people’s thinking in natural language: Using an automated meaning extraction method in open– ended self–descriptions,. J. of Research in Personality. Choi, Y. and Cardie, C. 2010. Hierarchical sequential learning for extracting opinions and their attributes. In ACL. Critchley, M. 1964. The neurology of psychotic speech. The British Journal of Psychiatry. Crocker, D. A. 2005. Tolerance and Deliberative Democracy. UMD Technical Report. Dahlgren, P. 2002. In search of the talkative public: Media, deliberative democracy and civic culture. Javnost/The Public. Dahlgren, Peter. 2005. The Internet, Public Spheres, and Political Communication: Dispersion and Deliberation. Political Communication. Escobar, O. 2012. Public Dialogue and Deliberation: A communication perspective for publicengagement practitioners. Handbook and Technical Report. Fayyad, U., and Irani, K. 1993. Multi-interval discretization of continuous-valued attributes for classification learning. In UAI. Fishkin, J. 1991. Democracy and deliberation. New Haven, CT: Yale University Press. Flor, M., and Hadar, U. 2005. The production of metaphoric expressions in spontaneous speech: A controlled-setting experiment. Metaphor and Symbol. Galley, M., K. McKeown, J. Hirschberg, E. Shriberg. 2004. Identifying agreement and disagreement in conversational speech: Use of Bayesian networks to model pragmatic dependencies. In ACL. Gastil, J. 2005. Communication as Deliberation: A Non-Deliberative Polemic on Communication Theory. Univ. of Washington, Technical Report. Gastil, J., and Dillard, J. P. 1999. Increasing political sophistication through public deliberation. Political Communication. Gastil, John. 2007. Political communication and deliberation. Sage Publications. Griffiths, T. and Steyvers, M. 2004. Finding scientific topics. In PNAS. Gutmann, A., and Thompson, D. F. 1996. Democracy and disagreement. Harvard University Press. Habermas. 1984. The theory of communicative action: Reason and rationalization of society. (T. McCarthy, Trans. Vol. 1). Boston, MA: Beacon Press. Hillard, D., Ostendorf, M., and Shriberg, E. 2003. 
Detection of Agreement vs. Disagreement in Meetings: Training with Unlabeled Data. HLTNAACL. Hansen, G. J., and Hyunjung, K. 2011. Is the media biased against me? A meta-analysis of the hostile media effect research. Communication Research Reports, 28, 169-179. Hassan, A. and Radev, D. 2010. Identifying text polarity using random walks.In ACL. Hofmann, T. 1999. Probabilistic latent semantic analysis. In UAI. Hu, M. and Liu, B. 2004. Mining and summarizing customer reviews. In SIGKDD. Joachims, T. Making large-Scale SVM Learning Practical. Advances in Kernel Methods - Support Vector Learning, B. Schölkopf and C. Burges and A. Smola (ed.), MIT-Press, 1999. Kim, S. and Hovy, E. 2007. Crystal: Analyzing predictive opinions on the web. In EMNLP-CoNLL. Landis, J. R. and Koch, G. G. 1977. The measurement of observer agreement for categorical data. Biometrics, 159–174. Lin, W. H., and Hauptmann, A. 2006. Are these documents written from different perspectives?: a test of different perspectives based on statistical distribution divergence. In ACL. Liu, B. 2012. Sentiment Analysis and Opinion Mining. Morgan & Claypool Publisher, USA. Luskin, R. C., Fishkin, J. S., and Iyengar, S. 2004. Considered Opinions on U.S. Foreign Policy: Faceto-Face versus Online Deliberative Polling. International Communication Association, New Orleans, LA. Mayfield, E. and Rose, C. P. 2011. Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model. In ACL. Moxey, L. M., and Sanford, A. J. 2000. Communicating quantities: A review of psycholinguistic evidence of how expressions determine perspectives. Applied Cognitive Psychology. Morbini, F. and Sagae, K. 2011. Joint Identification and Segmentation of Domain-Specific Dialogue Acts for Conversational Dialogue Systems. In ACL. 1689 Murakami, A., and Raymond, R. 2010. Support or Oppose? Classifying Positions in Online Debates from Reply Activities and Opinion Expressions. In COLING. Mukherjee, A. and Liu, B. 2013. Discovering User Interactions in Ideological Discussions. In ACL. Mukherjee, A. and Liu, B. 2012a. Mining Contentions from Discussions and Debates. In KDD. Mukherjee, A. and Liu, B. 2012b. Modeling review Comments. In ACL. Mukherjee, A. and Liu, B. 2012c. Aspect Extraction through Semi-Supervised Modeling. In ACL. Mukherjee, A. and Liu, B. 2012d. Analysis of Linguistic Style Accommodation in Online Debates. In COLING. Mutz, D. 2006. Hearing the Other Side: Deliberative Versus Participatory Democracy. Cambridge: Cambridge University Press, 2006. Pang, B. and Lee, L. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval. Pennebaker, J. W., Chung, C. K., Ireland, M., Gonzales, A., and Booth, R. J. 2007. The development and psychometric properties of LIWC2007. LIWC.Net. Popescu, A. and Etzioni, O. 2005. Extracting product features and opinions from reviews. In EMNLP. Price, V., Cappella, J. N., and Nir, L. 2002. Does disagreement contribute to more deliberative opinion? Political Communication. Rokeach, M., and Fruchter, B. 1956. A factorial study of dogmatism and related concepts. The Journal of Abnormal and Social Psychology. Ryfe, D. M. (2005). Does deliberative democracy work? Annual review of political science. Slavin, M. O., and Kriegman, D. 1992. The adaptive design of the human psyche: Psychoanalysis, evolutionary biology, and the therapeutic process. Guilford Press. Somasundaran, S., J. Wiebe. 2009. Recognizing stances in online debates. In ACL-IJCNLP. Stromer-Galley, J. 2005. 
Conceptualizing and Measuring Coherence in Online Chat. Annual Meeting of the International Communication Association. Sunstein, C. R. 2002. The law of group polarization. Journal of political philosophy. Thomas, M., B. Pang and L. Lee. 2006. Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In EMNLP. Wang, L., Lui, M., Kim, S. N., Nivre, J., and Baldwin, T. 2011. Predicting thread discourse structure over technical web forums. In EMNLP. Wiebe, J. 2000. Learning subjective adjectives from corpora. In Proc. of National Conference on AI. Yessenalina, A., Yue, A., Cardie, C. 2010. Multilevel structured models for document-level sentiment classification. In EMNLP. Zhao, X., J. Jiang, H. Yan, and X. Li. 2010. Jointly modeling aspects and opinions with a MaxEntLDA hybrid. In EMNLP. Zingo, M. T. (1998). Sex/gender Outsiders, Hate Speech, and Freedom of Expression: Can They Say that about Me? Praeger Publishers. 1690
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1691–1701, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Offspring from Reproduction Problems: What Replication Failure Teaches Us Antske Fokkens and Marieke van Erp The Network Institute VU University Amsterdam Amsterdam, The Netherlands {a.s.fokkens,m.g.j.van.erp}@vu.nl Marten Postma Utrecht University Utrecht, The Netherlands [email protected] Ted Pedersen Dept. of Computer Science University of Minnesota Duluth, MN 55812 USA [email protected] Piek Vossen The Network Institute VU University Amsterdam Amsterdam, The Netherlands [email protected] Nuno Freire The European Library The Hague, The Netherlands [email protected] Abstract Repeating experiments is an important instrument in the scientific toolbox to validate previous work and build upon existing work. We present two concrete use cases involving key techniques in the NLP domain for which we show that reproducing results is still difficult. We show that the deviation that can be found in reproduction efforts leads to questions about how our results should be interpreted. Moreover, investigating these deviations provides new insights and a deeper understanding of the examined techniques. We identify five aspects that can influence the outcomes of experiments that are typically not addressed in research papers. Our use cases show that these aspects may change the answer to research questions leading us to conclude that more care should be taken in interpreting our results and more research involving systematic testing of methods is required in our field. 1 Introduction Research is a collaborative effort to increase knowledge. While it includes validating previous approaches, our experience is that most research output in our field focuses on presenting new approaches, and to a somewhat lesser extent building upon existing work. In this paper, we argue that the value of research that attempts to replicate previous approaches goes beyond simply validating what is already known. It is also an essential aspect for building upon existing approaches. Especially when validation fails or variations in results are found, systematic testing helps to obtain a clearer picture of both the approach itself and of the meaning of state-of-theart results leading to a better insight into the quality of new approaches in relation to previous work. We support our claims by presenting two use cases that aim to reproduce results of previous work in two key NLP technologies: measuring WordNet similarity and Named Entity Recognition (NER). Besides highlighting the difficulty of repeating other researchers’ work, new insights about the approaches emerged that were not presented in the original papers. This last point shows that reproducing results is not merely part of good practice in science, but also an essential part in gaining a better understanding of the methods we use. Likewise, the problems we face in reproducing previous results are not merely frustrating inconveniences, but also pointers to research questions that deserve deeper investigation. We investigated five aspects that cause experimental variation that are not typically described in publications: preprocessing (e.g. tokenisation), experimental setup (e.g. splitting data for cross-validation), versioning (e.g. which version of WordNet), system output (e.g. the exact features used for individual tokens in NER), and system variation (e.g. treatment of ties). 
As such, reproduction provides a platform for systematically testing individual aspects of an approach that contribute to a given result. What is the influence of the size of the dataset, for example? How does using a different dataset affect the results? What is a reasonable divergence between different runs of the same experiment? Finding answers to these questions enables us to better interpret our state-of-the-art results. 1691 Moreover, the experiments in this paper show that even while strictly trying to replicate a previous experiment, results may vary up to a point where they lead to different answers to the main question addressed by the experiment. The WordNet similarity experiment use case compares the performance of different similarity measures. We will show that the answer as to which measure works best changes depending on factors such as the gold standard used, the strategy towards partof-speech or the ranking coefficient, all aspects that are typically not addressed in the literature. The main contributions of this paper are the following: 1) An in-depth analysis of two reproduction use cases in NLP 2) New insights into the state-of-the-art results for WordNet similarities and NER, found because of problems in reproducing prior research 3) A categorisation of aspects influencing reproduction of experiments and suggestions on testing their influence systematically The code, data and experimental setup for the WordNet experiments are available at http://github.com/antske/ WordNetSimilarity, and for the NER experiments at http://github.com/Mvanerp/ NER. The experiments presented in this paper have been repeated by colleagues not involved in the development of the software using the code included in these repositories. The remainder of this paper is structured as follows. In Section 2, previous work is discussed. Sections 3 and 4 describe our real-world use cases. In Section 5, we present our observations, followed by a more general discussion in Section 6. In Section 7, we present our conclusions. 2 Background This section provides a brief overview of recent work addressing reproduction and benchmark results in computer science related studies and discusses how our research fits in the overall picture. Most researchers agree that validating results entails that a method should lead to the same overall conclusions rather than producing the exact same numbers (Drummond, 2009; Dalle, 2012; Buchert and Nussbaum, 2012, etc.). In other words, we should strive to reproduce the same answer to a research question by different means, perhaps by re-implementing an algorithm or evaluating it on a new (in domain) data set. Replication has a somewhat more limited aim, and simply involves running the exact same system under the same conditions in order to get the exact same results as output. According to Drummond (2009) replication is not interesting, since it does not lead to new insights. On this point we disagree with Drummond (2009) as replication allows us to: 1) validate prior research, 2) improve on prior research without having to rebuild software from scratch, and 3) compare results of reimplementations and obtain the necessary insights to perform reproduction experiments. The outcome of our use cases confirms the statement that deeper insights into an approach can be obtained when all resources are available, an observation also made by Ince et al. (2012). Even if exact replication is not a goal many strive for, Ince et al. 
(2012) argue that insightful reproduction can be an (almost) impossible undertaking without the source code being available. Moreover, it is not always clear where replication stops and reproduction begins. Dalle (2012) distinguishes levels of reproducing results related to how close they are to the original work and how each contributes to research. In general, an increasing awareness of the importance of reproduction research and open code and data can be observed based on publications in high-profile journals (e.g. Nature (Ince et al., 2012)) and initiatives such as myExperiment.1 Howison and Herbsleb (2013) point out that, even though this is important, often not enough (academic) credit is gained from making resources available. What is worse, the same holds for research that investigates existing methods rather than introducing new ones, as illustrated by the question that is found on many review forms ‘how novel is the presented approach?’. On the other hand, initiatives for journals addressing exactly this issue (Neylon et al., 2012) and tracks focusing on results verification at conferences such as VLDB2 show that this opinion is not universal. A handful of use cases on reproducing or replicating results have been published. Louridas and Gousios (2012) present a use case revealing that source code alone is not enough for reproducing 1http://www.myexperiment.org 2http://www.vldb.org/2013/ 1692 results, a point that is also made by Mende (2010) who provides an overview of all information required to replicate results. The experiments in this paper provide use cases that confirm the points brought out in the literature mentioned above. This includes both observations that a detailed level of information is required for truly insightful reproduction research as well as the claim that such research leads to better understanding of our techniques. Furthermore, the work in this paper relates to Bikel (2004)’s work. He provides all information needed in addition to Collins (1999) to replicate Collins’ benchmark results. Our work is similar in that we also aim to fill in the blanks needed to replicate results. It must be noted, however, that the use cases in this paper have a significantly smaller scale than Bikel’s. Our research distinguishes itself from previous work, because it links the challenges of reproduction to what they mean for reported results beyond validation. Ruml (2010) mentions variations in outcome as a reason not to emphasise comparisons to benchmarks. Vanschoren et al. (2012) propose to use experimental databases to systematically test variations for machine learning, but neither links the two issues together. Raeder et al. (2010) come closest to our work in a critical study on the evaluation of machine learning. They show that choices in the methodology, such as data sets, evaluation metrics and type of cross-validation can influence the conclusions of an experiment, as we also find in our second use case. However, they focus on the problem of evaluation and recommendations on how to achieve consistent reproducible results. Our contribution is to investigate how much results vary. We cannot control how fellow researchers carry out their evaluation, but if we have an idea of the variations that typically occur within a system, we can better compare approaches for which not all details are known. 
3 WordNet Similarity Measures Patwardhan and Pedersen (2006) and Pedersen (2010) present studies where the output of a variety of WordNet similarity and relatedness measures are compared. They rank Miller and Charles (1991)’s set (henceforth “mc-set”) of 30 word pairs according to their semantic relatedness with several WordNet similarity measures. Each measure ranks the mc-set of word pairs and these outputs are compared to Miller and Charles (1991)’s gold standard based on human rankings using the Spearman’s Correlation Coefficient (Spearman, 1904, ρ). Pedersen (2010) also ranks the original set of 65 word pairs ranked by humans in an experiment by Rubenstein and Goodenough (1965) (rg-set) which is a superset of Miller and Charles’s set. 3.1 Replication Attempts This research emerged from a project running a similar experiment for Dutch on Cornetto (Vossen et al., 2013). First, an attempt was made to reproduce the results reported in Patwardhan and Pedersen (2006) and Pedersen (2010) on the English WordNet using their WordNet::Similarity web-interface.3 Results differed from those reported in the aforementioned works, even when using the same versions as the original, WordNet::Similarity-1.02 and WordNet 2.1 (Patwardhan and Pedersen, 2006) and WordNet::Similarity-2.05 and WordNet 3.0 (Pedersen, 2010), respectively.4 The fact that results of similarity measures on WordNet can differ even while the same software and same versions are used indicates that properties which are not addressed in the literature may influence the output of similarity measures. We therefore conducted a range of experiments that, in addition to searching for the right settings to replicate results of previous research, address the following questions: 1) Which properties have an impact on the performance of WordNet similarity measures? 2) How much does the performance of individual measures vary? 3) How do commonly used measures compare when the variation of their performance are taken into account? 3.2 Methodology and first observations The questions above were addressed in two stages. In the first stage, Fokkens, who was not involved in the first replication attempt implemented a script to calculate similarity measures using WordNet::Similarity. This included similarity measures introduced by Wu and Palmer (1994) (wup), 3Obtained from http://talisker.d.umn.edu/ cgi-bin/similarity/similarity.cgi, WordNet::Similarity version 2.05. This web interface has now moved to http://maraca.d.umn.edu 4WordNet::Similarity were obtained http:// search.cpan.org/dist/WordNet-Similarity/. 1693 Leacock and Chodorow (1998) (lch), Resnik (1995) (res), Jiang and Conrath (1997) (jcn), Lin (1998) (lin), Banerjee and Pedersen (2003) (lesk), Hirst and St-Onge (1998) (hso) and Patwardhan and Pedersen (2006) (vector and vpairs) respectively. Consequently, settings and properties were changed systematically and shared with Pedersen who attempted to produce the new results with his own implementations. First, we made sure that the script implemented by Fokkens could produce the same WordNet similarity scores for each individual word pair as those used to calculate the ranking on the mc-set by Pedersen (2010). Finally, the gold standard and exact implementation of the Spearman ranking coefficient were compared. Differences in results turned out to be related to variations in the experimental setup. First, we made different assumptions on the restriction of part-of-speech tags (henceforth “PoS-tag”) considered in the comparison. 
Miller and Charles (1991) do not discuss how they deal with words with more than one PoS-tag in their study. Pedersen therefore included all senses with any PoS-tag in his study. The first replication attempt had restricted PoS-tags to nouns, based on the idea that most items are nouns and subjects would be primed to primarily think of the noun senses. Both assumptions are reasonable. PoS-tags were not restricted in the second replication attempt, but because of a bug in the code only the first identified PoS-tag ("noun" in all cases) was considered. We therefore mistakenly assumed that PoS-tag restrictions did not matter until we compared individual scores between Pedersen and the replication attempts. Second, there are two gold standards for the Miller and Charles (1991) set: one has the scores assigned during the original experiment run by Rubenstein and Goodenough (1965), the other has the scores assigned during Miller and Charles (1991)'s own experiment. The ranking correlation between the two sets is high, but they are not identical. Again, there is no reason why one gold standard would be a better choice than the other, but in order to replicate results, it must be known which of the two was used. Third, results changed because of differences in the treatment of ties while calculating Spearman ρ. The influence of the exact gold standard and calculation of Spearman ρ could only be found because Pedersen could provide the output of the similarity measures he used to calculate the coefficient. It is unlikely we would have been able to replicate his results at all without the output of this intermediate step. Finally, results for lch, lesk and wup changed according to measure-specific configuration settings, such as including a PoS-tag-specific root node or turning on normalisation.

In the second stage of this research, we ran experiments that systematically manipulate the influential factors described above. In this experiment, we included both the mc-set and the complete rg-set. The implementation of Spearman ρ used in Pedersen (2010) assigned the lowest number in ranking to ties rather than the mean, resulting in an unjustified drop in results for scores that lead to many ties. We therefore experimented with a different correlation measure, the Kendall tau coefficient (Kendall, 1938, τ), rather than two versions of Spearman ρ.

3.3 Variation per measure

All measures varied in their performance. The complete outcome of our experiments (both the similarity measures assigned to each pair and the output of the ranking coefficients) is included in the data set provided at http://github.com/antske/WordNetSimilarity. Table 1 presents an overview of the main point we wish to make through this experiment: the minimal and maximal results according to both ranking coefficients.

Table 1: Variation in the WordNet measures' results.
  measure | Spearman ρ min  max | Kendall τ min   max | ranking variation
  path based similarity
  path    |       0.70  0.78    |       0.55  0.62    | 1-8
  wup     |       0.70  0.79    |       0.53  0.61    | 1-6
  lch     |       0.70  0.78    |       0.55  0.62    | 1-7
  path based information content
  res     |       0.65  0.75    |       0.26  0.57    | 4-11
  lin     |       0.49  0.73    |       0.36  0.53    | 6-10
  jcn     |       0.46  0.73    |       0.32  0.55    | 5, 7-11
  path based relatedness
  hso     |       0.73  0.80    |       0.36  0.41    | 1-3, 5-10
  dictionary and corpus based relatedness
  vpairs  |       0.40  0.70    |       0.26  0.50    | 7-11
  vector  |       0.48  0.92    |       0.33  0.76    | 1, 2, 4, 6-11
  lesk    |       0.66  0.83    |      -0.02  0.61    | 1-8, 11, 12

Results for the similarity measures varied by 0.06-0.42 points for Spearman ρ and by 0.05-0.60 points for Kendall τ. The last column indicates the variation in a measure's ranking relative to the other measures, where 1 is the best performing measure and 12 is the worst (some measures ranked differently as their individual configuration settings changed; in these cases, the measure was included in the overall ranking multiple times, which is why there are more ranking positions than measures). For instance, path has been the best performing measure, second best, eighth best and all positions in between; vector has ranked first, second and fourth, but has also occupied all positions from six to eleven. In principle, it is to be expected that numbers are not exactly the same when evaluating against a different data set (the mc-set versus the rg-set), taking a different set of synsets to evaluate on (changing PoS-tag restrictions) or changing configuration settings that influence the similarity score. However, a variation of up to 0.44 points in Spearman ρ and 0.60 in Kendall τ (Section 3.4 explains why the variation in Kendall τ is this extreme and why ρ is more appropriate for this task) leads to the question of how indicative these results really are. A more serious problem is the fact that the comparative performance of individual measures changes. Which measure performs best depends on the evaluation set, ranking coefficient, PoS-tag restrictions and configuration settings. This means that the answer to the question of which similarity measure best mimics human similarity scores depends on aspects that are often not even mentioned, let alone systematically compared.
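As an illustration of the kind of pipeline being compared here, the Python sketch below scores word pairs with NLTK's WordNet interface and correlates the resulting ranking with a gold standard using both coefficients. NLTK is only a stand-in for the Perl WordNet::Similarity package used in the experiments (it covers only the path-based measures shown), and the word pairs and gold scores in the example are placeholders, not the published Miller and Charles ratings.

  from itertools import product
  from nltk.corpus import wordnet as wn
  from scipy.stats import spearmanr, kendalltau

  def max_similarity(w1, w2, simfunc, pos=None):
      """Maximum similarity over all synset pairs; pos=None mimics the
      'no PoS restriction' setting, pos=wn.NOUN restricts to nouns."""
      best = 0.0
      for s1, s2 in product(wn.synsets(w1, pos=pos), wn.synsets(w2, pos=pos)):
          try:
              score = simfunc(s1, s2)
          except Exception:      # e.g. lch across different PoS
              score = None
          if score is not None and score > best:
              best = score
      return best

  def correlate(pairs, gold, simfunc, pos=None):
      system = [max_similarity(w1, w2, simfunc, pos) for w1, w2 in pairs]
      human = [gold[(w1, w2)] for w1, w2 in pairs]
      rho, _ = spearmanr(system, human)
      tau, _ = kendalltau(system, human)
      return rho, tau

  if __name__ == "__main__":
      # Placeholder pairs and gold scores, purely for illustration.
      pairs = [("car", "automobile"), ("coast", "shore"), ("noon", "string")]
      gold = {("car", "automobile"): 3.9, ("coast", "shore"): 3.6, ("noon", "string"): 0.1}
      for name, f in [("path", wn.path_similarity),
                      ("wup", wn.wup_similarity),
                      ("lch", wn.lch_similarity)]:
          print(name, correlate(pairs, gold, f, pos=wn.NOUN))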
3.4 Variation per category

For each influential category of experimental variation, we compared the variation in Spearman ρ and Kendall τ, while the similarity measure and the other influential categories were kept stable. The categories we varied include the WordNet and WordNet::Similarity version, the gold standard used to evaluate, restrictions on PoS-tags, and measure-specific configurations. Table 2 presents the maximum variation found across measures for each category. The last column indicates how often the ranking of a specific measure changed as the category changed, e.g. did the measure ranking third using specific configurations, PoS-tag restrictions and a specific gold standard using WordNet 2.1 still rank third when WordNet 3.0 was used instead? The number in parentheses next to the 'different ranks' in the table presents the total number of scores investigated. Note that this number changes for each category, because we compared two WordNet versions (WN version), three gold standard and PoS-tag restriction variations, and configuration only for the subset of scores where configuration matters.

Table 2: Variations per category.
  variation     | max difference Spearman ρ | max difference Kendall τ | different ranks (total)
  WN version    | 0.44                      | 0.42                     | 223 (252)
  gold standard | 0.24                      | 0.21                     | 359 (504)
  PoS-tag       | 0.09                      | 0.08                     | 208 (504)
  configuration | 0.08                      | 0.60                     | 37 (90)

There are no definite statements to make as to which version (Patwardhan and Pedersen (2006) vs Pedersen (2010)), PoS-tag restriction or configuration gives the best results. Likewise, while most measures do better on the smaller data set, some achieve their highest results on the full set. This is partially due to the fact that ranking coefficients are sensitive to outliers. In several cases where PoS-tag restrictions led to different results, only one pair received a different score.
For instance, path assigns a relatively high score to the pair chord-smile when verbs are included, because the hierarchy of verbs in WordNet is relatively flat. This effect is not observed in wup and lch which correct for the depth of the hierarchy. On the other hand, res, lin and jcn score better on the same set when verbs are considered, because they cannot detect any relatedness for the pair crane-implement when restricted to nouns. On top of the variations presented above, we notice a discrepancy between the two coefficients. Kendall τ generally leads to lower coefficiency scores than Spearman ρ. Moreover, they each give different relative indications: where lesk achieves its highest Spearman ρ, it has an extremely low Kendall τ of 0.01. Spearman ρ uses the difference in rank as its basis to calculate a correlation, where Kendall τ uses the number of items with the correct rank. The low Kendall τ for lesk is the result of three pairs receiving a score that is too high. Other pairs that get a relatively accurate score are pushed one place down in rank. Because only items that receive the exact same rank help to increase τ, such a shift can result in a drastic drop in the coefficient. In our opinion, Spearman ρ is therefore preferable over Kendall τ. We included τ, because many authors do not mention the ranking coefficient they use (cf. Budanitsky and Hirst (2006), Resnik (1995)) and both ρ and τ are com1695 monly used coefficients. Except for WordNet, which Budanitsky and Hirst (2006) hold accountable for minor variations in a footnote, the influential categories we investigated in this paper, to our knowledge, have not yet been addressed in the literature. Cramer (2008) points out that results from WordNet-Human similarity correlations lead to scattered results reporting variations similar to ours, but she compares studies using different measures, data and experimental setup. This study shows that even if the main properties are kept stable, results vary enough to change the identity of the measure that yields the best performance. Table 1 reveals a wide variation in ranking relative to alternative approaches. Results in Table 2 show that it is common for the ranking of a score to change due to variations that are not at the core of the method. This study shows that it is far from clear how different WordNet similarity measures relate to each other. In fact, we do not know how we can obtain the best results. This is particularly challenging, because the ‘best results’ may depend on the intended use of the similarity scores (Meng et al., 2013). This is also the reason why we presented the maximum variation observed, rather than the average or typical variation (mostly below 0.10 points). The experiments presented in this paper resulted in a vast amount of data. An elaborate analysis of this data is needed to get a better understanding of how measures work and why results vary to such an extent. We leave this investigation to future work. If there is one takehome message from this experiment, it is that one should experiment with parameters such as restrictions on PoS-tags or configurations and determine which score to use depending on what it is used for, rather than picking something that did best in a study using different data for a different task and may have used a different version of WordNet. 4 Reproducing a NER method Freire et al. (2012) describe an approach to classifying named entities in the cultural heritage domain. 
The approach is based on the assumption that domain knowledge, encoded in complex features, can aid a machine learning algorithm in NER tasks when only little training data is available. These features include information about person and organisation names, locations, as well as PoS-tags. Additionally, some general features are used, such as a window of three preceding and two following tokens, token length and capitalisation information. Experiments are run in a 10-fold cross-validation setup using an open source machine learning toolkit (McCallum, 2002).

4.1 Reproducing NER Experiments

This experiment can be seen as a real-world case of the sad tale of the Zigglebottom tagger (Pedersen, 2008). The (fictional) Zigglebottom tagger is a tagger with spectacular results that looks like it will solve some major problems in your system. However, the code is not available and a new implementation does not yield the same results. The original authors cannot provide the necessary details to reproduce their results, because most of the work has been done by a PhD student who has finished and moved on to something else. In the end, the newly implemented Zigglebottom tagger is not used, because it does not lead to the promised better results and all effort went to waste.

Van Erp was interested in the NER approach presented in Freire et al. (2012). Unfortunately, the code could not be made available, so she decided to reimplement the approach. Despite feedback from Freire about particular details of the system, results remained 20 points below those reported in Freire et al. (2012) in overall F-score (Van Erp and Van der Meij, 2013). The reimplementation process involved choices about seemingly small details such as rounding to how many decimals, how to tokenise or how much data cleanup to perform (normalisation of non-alphanumeric characters, for example). Trying different parameter combinations for feature generation and the algorithm never yielded the exact same results as Freire et al. (2012). The results of the best run in our first reproduction attempt, together with the original results from Freire et al. (2012), are presented in Table 3. Van Erp and Van der Meij (2013) provide an overview of the implementation efforts.

Table 3: Precision, recall and Fβ=1 scores for the original experiments from Freire et al. (2012) and our replication of their approach as presented in Van Erp and Van der Meij (2013).
                    Freire et al. (2012) results   | Van Erp and Van der Meij's replication results
                    P      R     Fβ=1              | P        R        Fβ=1
  LOC (388)         92%    55%   69                | 77.80%   39.18%   52.05
  ORG (157)         90%    57%   70                | 65.75%   30.57%   41.74
  PER (614)         91%    56%   69                | 73.33%   37.62%   49.73
  Overall (1,159)   91%    55%   69                | 73.33%   37.19%   49.45

4.2 Following up from reproduction

Since the experiments in Van Erp and Van der Meij (2013) introduce several new research questions regarding the influence of data cleaning and the limitations of the dataset, we performed some additional experiments. First, we varied the tokenisation, removing non-alphanumeric characters from the data set. This yielded a significantly smaller data set (10,442 tokens vs 12,510), and a 15 point drop in overall F-score. Then, we investigated whether variation in the cross-validation splits made any difference, as we noticed that some NEs were only present in particular fields in the data, which can have a significant impact on a small dataset.
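Here is a small sketch, with a hypothetical train_and_eval function standing in for the actual CRF training and evaluation, of the two fold-assignment strategies discussed next (distributing whole database records versus individual sentences over the folds) and of measuring how much the per-fold F-scores spread.

  import random
  import statistics

  def make_folds(units, k=10, seed=0):
      """Randomly distribute units (records or sentences) over k folds."""
      rng = random.Random(seed)
      shuffled = list(units)
      rng.shuffle(shuffled)
      return [shuffled[i::k] for i in range(k)]

  def cross_validate(units, train_and_eval, k=10, seed=0):
      """train_and_eval(train_units, test_units) -> F-score; a placeholder for
      training the NER model on the training folds and scoring the held-out fold."""
      folds = make_folds(units, k, seed)
      scores = []
      for i in range(k):
          test = folds[i]
          train = [u for j, f in enumerate(folds) if j != i for u in f]
          scores.append(train_and_eval(train, test))
      return statistics.mean(scores), statistics.stdev(scores)

  # Usage: compare record-level vs sentence-level fold assignment, where
  # 'records' is a list of database records and 'sentences' is the same data
  # split into sentences.
  # mean_r, sd_r = cross_validate(records, train_and_eval)
  # mean_s, sd_s = cross_validate(sentences, train_and_eval)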
We inspected the difference between different crossvalidation folds by computing the standard deviations of the scores and found deviations of up to 25 points in F-score between the 10 splits. In the general setup, database records were randomly distributed over the folds and cut off to balance the fold sizes. In a different approach to dividing the data by distributing individual sentences from the records over the folds, performance increases by 8.57 points in overall F-score to 58.02. This is not what was done in the original Freire et al. (2012) paper, but shows that the results obtained with this dataset are quite fragile. As we worried about the complexity of the feature set relative to the size of the data set, we deviated somewhat from Freire et al. (2012)’s experiments in that we switched some features on and off. Removal of complex features pertaining to the window around the focus token improved our results by 3.84 points in overall F-score to 53.39. The complex features based on VIAF,7 GeoNames8 and WordNet do contribute to the classification in the Mallet setup as removing them and only using the focus token, window and generic features causes a slight drop in overall F-score from 49.45 to 47.25. When training the Stanford NER system (Finkel et al., 2005) on just the tokens from the Freire data set and the parameters from english.all.3class.distsim.prop (included in the Stanford NER release, see also Van Erp and Van der Meij (2013)), our F-scores come very close to those reported by Freire et al. (2012), but mostly with a higher recall and lower precision. It is puzzling that the Stanford system obtains such high 7http://www.viaf.org 8http://www.geonames.org results with only very simple features, whereas for Mallet the complex features show improvement over simpler features. This leads to questions about the differences between the CRF implementations and the influence of their parameters, which we hope to investigate in future work. 4.3 Reproduction difficulties explained Several reasons may be the cause of why we fail to reproduce results. As mentioned, not all resources and data were available for this experiment, thus causing us to navigate in the dark as we could not reverse-engineer intermediate steps, but only compare to the final precision, recall and F-scores. The experiments follow a general machine learning setup consisting roughly of four steps: preprocess data, generate features, train model and test model. The novelty and replication problems lie in the first three steps. How the data was preprocessed is a major factor here. The data set consisted of XML files marked up with inline named entity tags. In order to generate machine learning features, this data has to be tokenised, possibly cleaned up and the named entity markup had to be converted to a token-based scheme. Each of these steps can be carried out in several ways, and choices made here can have great influence on the rest of the pipeline. Similar choices have to be made for preprocessing external resources. From the descriptions in the original paper, it is unclear how records in VIAF and GeoNames were preprocessed, or even which versions of these resources were used. Preprocessing and calculating occurrence statistics over VIAF takes 30 hours for each run. It is thus not feasible to identify the main potential variations without the original data to verify this prepatory step. 
Numbers had to be rounded when generating the features, leading to the question of how many decimals are required to be discriminative without creating an overly sparse dataset. Freire recalls that encoding features as multi-value discrete fea1697 tures versus several boolean features can have significant impact. These settings are not mentioned in the paper, making reproduction very difficult. As the project in which the original research was performed has ended, and there is no central repository where such information can be retrieved, we are left to wonder how to reuse this approach in order to further domain-specific NER. 5 Observations In this section, we generalise the observations from our use cases to the main categories that can influence reproduction. Despite our efforts to describe our systems as clearly as possible, details that can make a tremendous difference are often omitted in papers. It will be no surprise to researchers in the field that preprocessing of data can make or break an experiment. The choice of which steps we perform, and how each of these steps is carried out exactly are part of our experimental setup. A major difference in the results for the NER experiments was caused by variations in the way in which we split the data for cross-validation. As we fine-tune our techniques, software gets updated, data sets are extended or annotation bugs are fixed. In the WordNet experiment, we found that there were two different gold standard data sets. There are also different versions of WordNet, and the WordNet::Similarity packages. Similarly for the NER experiment, GeoNames, VIAF and Mallet are updated regularly. It is therefore critical to pay attention to versioning. Our experiments often consist of several different steps whose outputs may be difficult to retrace. In order to check the output of a reproduction experiment at every step of the way, system output of experiments, including intermediate steps, is vital. The WordNet replication was only possible, because Pedersen could provide the similarity scores of each word pair. This enabled us to compare the intermediate output and identify the source of differences in output. Lastly, there may be inherent system variations in the techniques used. Machine learning algorithms may for instance use coin flips in case of a tie. This was not observed in our experiments, but such variations may be determined by running an experiment several times and taking the average over the different runs (cf. Raeder et al. (2010)). All together, these observations show that sharing data and software play a key role in gaining insight into how our methods work. Vanschoren et al. (2012) propose a setup that allows researchers to provide their full experimental setup, which should include exact steps followed in preprocessing the data, documentation of the experimental setup, exact versions of the software and resources used and experimental output. Having access to such a setup allows other researchers to validate research, but also tweak the approach to investigate system variation, systematically test the approach in order to learn its limitations and strengths and ultimately improve on it. 6 Discussion Many of the aspects addressed in the previous section such as preprocessing are typically only mentioned in passing, or not at all. There is often not enough space to capture all details, and they are generally not the core of the research described. 
Still, our use cases have shown that they can have a tremendous impact on reproduction, and can even lead to different conclusions. This leads to serious questions on how we can interpret our results and how we can compare the performance of different methods. Is an improvement of a few per cent really due to the novelty of the approach if larger variations are found when the data is split differently? Is a method that does not quite achieve the highest reported state-of-the-art result truly less good? What does a state-of-the-art result mean if it is only tested on one data set? If one really wants to know whether a result is better or worse than the state-of-the-art, the range of variation within the state-of-the-art must be known. Systematic experiments such as the ones we carried out for WordNet similarity and NER, can help determine this range. For results that fall within the range, it holds that they can only be judged by evaluations going beyond comparing performance numbers, i.e. an evaluation of how the approach achieves a given result and how that relates to alternative approaches. Naturally, our use cases do not represent the entire gamut of research methodologies and problems in the NLP community. However, they do represent two core technologies and our observations align with previous literature on replication and reproduction. Despite the systematic variation we employed 1698 in our experiments, they do not answer all questions that the problems in reproduction evoked. For the WordNet experiments, deeper analysis is required to gain full understanding of how individual influential aspects interact with each measurement. For the NER experiments, we are yet to identify the cause of our failure to reproduce. The considerable time investment required for such experiments forms a challenge. Due to pressure to publish or other time limitations, they cannot be carried out for each evaluation. Therefore, it is important to share our experiments, so that other researchers (or students) can take this up. This could be stimulated by instituting reproduction tracks in conferences, thus rewarding systematic investigation of research approaches. It can also be aided by adopting initiatives that enable authors to easily include data, code and/or workflows with their publications such as the PLOS/figshare collaboration.9 We already do a similar thing for our research problems by organising challenges or shared tasks, why not extend this to systematic testing of our approaches? 7 Conclusion We have presented two reproduction use cases for the NLP domain. We show that repeating other researchers’ experiments can lead to new research questions and provide new insights into and better understanding of the investigated techniques. Our WordNet experiments show that the performance of similarity measures can be influenced by the PoS-tags considered, measure specific variations, the rank coefficient and the gold standard used for comparison. We not only find that such variations lead to different numbers, but also different rankings of the individual measures, i.e. these aspects lead to a different answer to the question as to which measure performs best. We did not succeed in reproducing the NER results of Freire et al. (2012), showing the complexity of what seems a straightforward reproduction case based on a system description and training data only. 
Our analyses show that it is still an open question whether additional complex features improve domain specific NER and that this may partially depend on the CRF implementation. Some observations go beyond our use cases. In particular, the fact that results vary significantly 9http://blogs.plos.org/plos/2013/01/ easier-access-to-plos-data/ because of details that are not made explicit in our publications. Systematic testing can provide an indication of this variation. We have classified relevant aspects in five categories occurring across subdisciplines of NLP: preprocessing, experimental setup, versioning, system output, and system variation. We believe that knowing the influence of different aspects in our experimental workflow can help increase our understanding of the robustness of the approach at hand and will help understand the meaning of the state-of-the-art better. Some techniques are reused so often (the papers introducing WordNet similarity measures have around 1,0002,000 citations each as of February 2013, for example) that knowing their strengths and weaknesses is essential for optimising their use. As mentioned many times before, sharing is key to facilitating reuse, even if the code is imperfect and contains hacks and possibly bugs. In the end, the same holds for software as for documentation: it is like sex: if it is good, it is very good and if it is bad, it is better than nothing!10 But most of all: when reproduction fails, regardless of whether original code or a reimplementation was used, valuable insights can emerge from investigating the cause of this failure. So don’t let your failing reimplementations of the Zigglebottom tagger collect dusk on a shelf while others reimplement their own failing Zigglebottoms. As a community, we need to know where our approaches fail, as much –if not more– as where they succeed. Acknowledgments We would like to thank the anonymous reviewers for their eye to detail and useful comments to make this a better paper. We furthermore thank Ruben Izquierdo, Lourens van der Meij, Christoph Zwirello, Rebecca Dridan and the Semantic Web Group at VU University for their help and useful feedback. The research leading to this paper was supported by the European Union’s 7th Framework Programme via the NewsReader Project (ICT-316404), the Agora project, by NWO CATCH programme, grant 640.004.801, and the BiographyNed project, a joint project with Huygens/ING Institute of the Dutch Academy of Sciences funded by the Netherlands eScience Center (http://esciencecenter.nl/). 10The documentation variant of this quote is attributed to Dick Brandon. 1699 References Stanjeev Banerjee and Ted Pedersen. 2003. Extended gloss overlaps as a measure of semantic relatedness. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, pages 805– 810, Acapulco, August. Daniel M. Bikel. 2004. Intricacies of Collins’ parsing model. Computational Linguistics, 30(4):479–511. Tomasz Buchert and Lucas Nussbaum. 2012. Leveraging business workflows in distributed systems research for the orchestration of reproducible and scalable experiments. In Anne Etien, editor, 9`eme ´edition de la conf´erence MAnifestation des JEunes Chercheurs en Sciences et Technologies de l’Information et de la Communication - MajecSTIC 2012 (2012). Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1):13– 47. Michael Collins. 1999. 
Head-Driven Statistical Models for Natural Language Parsing. Phd dissertation, University of Pennsylvania. Irene Cramer. 2008. How well do semantic relatedness measures perform? a meta-study. In Semantics in Text Processing. STEP 2008 Conference Proceedings, volume 1, pages 59–70. Olivier Dalle. 2012. On reproducibility and traceability of simulations. In WSC-Winter Simulation Conference-2012. Chris Drummond. 2009. Replicability is not reproducibility: nor is it good science. In Proceedings of the Twenty-Sixth International Conference on Machine Learning: Workshop on Evaluation Methods for Machine Learning IV. Jenny Finkel, Trond Grenager, and Christopher D. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 363–370, Ann Arbor, USA. Nuno Freire, Jos´e Borbinha, and P´avel Calado. 2012. An approach for named entity recognition in poorly structured data. In Proceedings of ESWC 2012. Graeme Hirst and David St-Onge. 1998. Lexical chains as representations of context for the detection and correction of malapropisms. In C. Fellbaum, editor, WordNet: An electronic lexical database, pages 305–332. MIT Press. James Howison and James D. Herbsleb. 2013. Sharing the spoils: incentives and collaboration in scientific software development. In Proceedings of the 2013 conference on Computer Supported Cooperative Work, pages 459–470. Darrel C. Ince, Leslie Hatton, and John GrahamCumming. 2012. The case for open computer programs. Nature, 482(7386):485–488. Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics (ROCLING X), pages 19–33, Taiwan. Maurice Kendall. 1938. A new measure of rank correlation. Biometrika, 30(1-2):81–93. Claudia Leacock and Martin Chodorow. 1998. Combining local context and WordNet similarity for word sense identification. In C. Fellbaum, editor, WordNet: An electronic lexical database, pages 265–283. MIT Press. Dekang Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning, pages 296—304, Madison, USA. Panos Louridas and Georgios Gousios. 2012. A note on rigour and replicability. SIGSOFT Softw. Eng. Notes, 37(5):1–4. Andrew K. McCallum. 2002. MALLET: A machine learning for language toolkit. http://mallet. cs.umass.edu. Thilo Mende. 2010. Replication of defect prediction studies: problems, pitfalls and recommendations. In Proceedings of the 6th International Conference on Predictive Models in Software Engineering. ACM. Lingling Meng, Runqing Huang, and Junzhong Gu. 2013. A review of semantic similarity measures in wordnet. International Journal of Hybrid Information Technology, 6(1):1–12. George A. Miller and Walter G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. Cameron Neylon, Jan Aerts, C Titus Brown, Simon J Coles, Les Hatton, Daniel Lemire, K Jarrod Millman, Peter Murray-Rust, Fernando Perez, Neil Saunders, Nigam Shah, Arfon Smith, Ga¨el Varoquaux, and Egon Willighagen. 2012. Changing computational research. the challenges ahead. Source Code for Biology and Medicine, 7(2). Siddharth Patwardhan and Ted Pedersen. 2006. Using wordnet based context vectors to estimate the semantic relatedness of concepts. 
In Proceedings of the EACL 2006 Workshop Making Sense of Sense Bringing Computational Linguistics and Psycholinguistics Together, pages 1–8, Trento, Italy. Ted Pedersen. 2008. Empiricism is not a matter of faith. Computational Linguistics, 34(3):465–470. 1700 Ted Pedersen. 2010. Information content measures of semantic similarity perform better without sensetagged text. In Proceedings of the 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2010), pages 329–332, Los Angeles, USA. Troy Raeder, T. Ryan Hoens, and Nitesh V. Chawla. 2010. Consequences of variability in classifier performance estimates. In Proceedings of ICDM’2010. Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI), pages 448–453, Montreal, Canada. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Wheeler Ruml. 2010. The logic of benchmarking: A case against state-of-the-art performance. In Proceedings of the Third Annual Symposium on Combinatorial Search (SOCS-10). Charles Spearman. 1904. Proof and measurement of association between two things. American Journal of Psychology, 15:72—101. Marieke Van Erp and Lourens Van der Meij. 2013. Reusable research? a case study in named entity recognition. CLTL 2013-01, Computational Lexicology & Terminology Lab, VU University Amsterdam. Joaquin Vanschoren, Hendrik Blockeel, Bernhard Pfahringer, and Geoffrey Holmes. 2012. Experiment databases. Machine Learning, 87(2):127–158. Piek Vossen, Isa Maks, Roxane Segers, Hennie van der Vliet, Marie-Francine Moens, Katja Hofmann, Erik Tjong Kim Sang, and Maarten de Rijke. 2013. Cornetto: a Combinatorial Lexical Semantic Database for Dutch. In Peter Spyns and Jan Odijk, editors, Essential Speech and Language Technology for Dutch Results by the STEVIN-programme, number XVII in Theory and Applications of Natural Language Processing, chapter 10. Springer. Zhibiao Wu and Martha Palmer. 1994. Verb semantics and lexical selection. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 133—138, Las Cruces, USA. 1701
2013
166
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1702–1712, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Evaluating Text Segmentation using Boundary Edit Distance Chris Fournier University of Ottawa Ottawa, ON, Canada [email protected] Abstract This work proposes a new segmentation evaluation metric, named boundary similarity (B), an inter-coder agreement coefficient adaptation, and a confusion-matrix for segmentation that are all based upon an adaptation of the boundary edit distance in Fournier and Inkpen (2012). Existing segmentation metrics such as Pk, WindowDiff, and Segmentation Similarity (S) are all able to award partial credit for near misses between boundaries, but are biased towards segmentations containing few or tightly clustered boundaries. Despite S’s improvements, its normalization also produces cosmetically high values that overestimate agreement & performance, leading this work to propose a solution. 1 Introduction Text segmentation is the task of splitting text into segments by placing boundaries within it. Segmentation is performed for a variety of purposes and is often a pre-processing step in a larger task. E.g., text can be topically segmented to aid video and audio retrieval (Franz et al., 2007), question answering (Oh et al., 2007), subjectivity analysis (Stoyanov and Cardie, 2008), and even summarization (Haghighi and Vanderwende, 2009). A variety of segmentation granularities, or atomic units, exist, including segmentations at the morpheme (e.g., Sirts and Alum¨ae 2012), word (e.g., Chang et al. 2008), sentence (e.g., Reynar and Ratnaparkhi 1997), and paragraph (e.g., Hearst 1997) levels. Between each atomic unit lies the potential to place a boundary. Segmentations can also represent the structure of text as being organized linearly (e.g., Hearst 1997), hierarchically (e.g., Eisenstein 2009), etc. Theoretically, segmentations could also contain varying boundary types, e.g., two boundary types could differentiate between act and scene breaks in a play. Because of its value to natural language processing, various text segmentation tasks have been automated such as topical segmentation— for which a variety of automatic segmenters exist (e.g., Hearst 1997, Malioutov and Barzilay 2006, Eisenstein and Barzilay 2008, and Kazantseva and Szpakowicz 2011). This work addresses how to best select an automatic segmenter and which segmentation metrics are most appropriate to do so. To select an automatic segmenter for a particular task, a variety of segmentation evaluation metrics have been proposed, including Pk (Beeferman and Berger, 1999, pp. 198–200), WindowDiff (WD; Pevzner and Hearst 2002, p. 10), and most recently Segmentation Similarity (S; Fournier and Inkpen 2012, p. 154–156). Each of these metrics have a variety of flaws: Pk and WindowDiff both under-penalize errors at the beginning of segmentations (Lamprier et al., 2007) and have a bias towards favouring segmentations with few or tightly-clustered boundaries (Niekrasz and Moore, 2010), while S produces overly optimistic values due to its normalization (shown later). To overcome the flaws of existing text segmentation metrics, this work proposes a new series of metrics derived from an adaptation of boundary edit distance (Fournier and Inkpen, 2012, p. 154– 156). This new metric is named boundary similarity (B). 
A confusion matrix to interpret segmentation as a classification problem is also proposed, allowing for the computation of information retrieval (IR) metrics such as precision and recall.1 In this work: §2 reviews existing segmentation metrics; §3 proposes an adaptation of boundary edit distance, a new normalization of it, a new confusion matrix for segmentation, and an inter1An implementation of boundary edit distance, boundary similarity, B-precision, and B-recall, etc. is provided at http://nlp.chrisfournier.ca/ 1702 coder agreement coefficient adaptation; §4 compares existing segmentation metrics to those proposed herein; §5 evaluates S and B based intercoder agreement; and §6 compares B, S, and WD while evaluating automatic segmenters. 2 Related Work 2.1 Segmentation Evaluation Many early studies evaluated automatic segmenters using information retrieval (IR) metrics such as precision, recall, etc. These metrics looked at segmentation as a binary classification problem and were very harsh in their comparisons—no credit was awarded for nearly missing a boundary. Near misses occur frequently in segmentation— although manual coders often agree upon the bulk of where segment lie, they frequently disagree upon the exact position of boundaries (Artstein and Poesio, 2008, p. 40). To attempt to overcome this issue, both Passonneau and Litman (1993) and Hearst (1993) conflated multiple manual segmentations into one that contained only those boundaries which the majority of coders agreed upon. IR metrics were then used to compare automatic segmenters to this majority solution. Such a majority solution is unsuitable, however, because it does not contain actual subtopic breaks, but instead the conflation of a collection of potentially disagreeing solutions. Additionally, the definition of what constitutes a majority is subjective (e.g., Passonneau and Litman (1993, p. 150), Litman and Passonneau (1995), Hearst (1993, p. 6) each used 4/7, 3/7, and > 50%, respectively). To address the issue of awarding partial credit for an automatic segmenter nearly missing a boundary—without conflating segmentations, Beeferman and Berger (1999, pp. 198–200) proposed a new metric named Pk. Pevzner and Hearst (2002, pp. 3–4) explain Pk well: a window of size k—where k is half of the mean manual segmentation length—is slid across both automatic and manual segmentations. A penalty is awarded if the window’s edges are found to be in differing or the same segments within the manual segmentation and the automatic segmentation disagrees. Pk is the sum of these penalties over all windows. Measuring the proportion of windows in error allows Pk to penalize a fully missed boundary by k windows, whereas a nearly missed boundary is penalized by the distance that it is offset. Pk was not without issue, however. Pevzner and Hearst (2002, pp. 5–10) identified that Pk: i) penalizes false negatives (FNs)2 more than false positives (FPs); ii) does not penalize full misses within k units of a reference boundary; iii) penalize near misses too harshly in some situations; and iv) is sensitive to internal segment size variance. To solve Pk’s issues, Pevzner and Hearst (2002, pp. 10) proposed a modification referred to as WindowDiff (WD). Its major difference is in how it decides to penalized windows: within a window, if the number of boundaries in the manual segmentation (Mij) differs from the number of boundaries in the automatic segmentation (Aij), then a penalty is given. 
The ratio of penalties over windows then represents the degree of error between the segmentations, as in Equation 1. This change better allowed WD to: i) penalize FPs and FNs more equally;3 ii) Not skip full misses; iii) Less harshly penalize near misses; and iv) Reduce its sensitivity to internal segment size variance. WD(M, A) = 1 N −k N−k X i=1,j=i+k (|Mij −Aij| > 0) (1) WD did not, however, solve all of the issues related to window-based segmentation comparison. WD, and inherently Pk: i) Penalize errors less at the beginning and end of segmentations (Lamprier et al., 2007); ii) Are biased towards favouring automatic segmentations with either few or tightly-clustered boundaries (Niekrasz and Moore, 2010); iii) Calculate window size k inconsistently;4 iv) Are not symmetric5 (meaning that they cannot be used to produce a pairwise mean of multiple manual segmentations6). Segmentation Similarity (S; Fournier and Inkpen 2012, pp. 154–156) took a different approach to comparing segmentations. Instead of using windows, the work proposes a new restricted edit distance called boundary edit distance which differentiates between full and near misses. S then 2I.e., a boundary present in the manual but not the automatic segmentation, and the reverse for a false positive. 3Georgescul et al. (2006, p. 48) noted that WD interprets a near miss as a FP probabilistically more than as a FN. 4k must be an integer, but half of a mean may be a fraction, thus rounding must be used, but no rounding method is specified. It is also not specified whether k should be set once during a study or recalculated for each comparison— this work assumes the latter. 5Window size is calculated only upon the manual segmentation, meaning that one must be a manual and other an automatic segmentation. 6This also means that WD and Pk cannot be adapted to compute inter-coder agreement coefficients. 1703 normalizes the counts of full and near misses identified by boundary edit distance, as shown in Equation 2, where sa and sb are the segmentations, nt is the maximum distance that boundaries may span to be considered a near miss, edits(sa, sb, nt) is the edit distance, and pb(D) is the number of potential boundaries in a document D (pb(D) = |D| −1). S(sa, sb, nt) = 1 −|edits(sa, sb, nt)| pb(D) (2) Boundary edit distance models full misses as the addition/deletion of a boundary, and near misses as n-wise transpositions. An n-wise transposition is the act of swapping the position of a boundary with an empty position such that it matches a boundary in the segmentation compared against (up to a spanning distance of nt). S also scales the severity of a near miss by the distance over which it is transposed, allowing it to scale the penalty of a near misses much like WD. S is also symmetric, allowing it to be used in pairwise means and inter-coder agreement coefficients. The usage of an edit distance that supported transpositions to compare segmentations was an advancement over window-based methods, but boundary edit distance and its normalization S are not without problems, specifically: i) This edit distance uses string reversals (ABCD =⇒DCBA) to perform transpositions, making it cumbersome to analyse individual pairs of boundaries between segmentations; ii) S is sensitive to variations in the total size of a segmentation, leading it to favour very sparse segmentations with few boundaries; iii) S produces cosmetically high values, making it difficult to interpret and causing over-estimation of inter-coder agreement. 
In this work, these deficiencies are demonstrated and a new set of metrics are proposed as replacements. 2.2 Inter-Coder Agreement Inter-coder agreement coefficients are used to measure whether a group of human judges (i.e. coders) agree with each other greater than chance. Such coefficients are used to determine the reliability and replicability of the coding scheme and instructions used to collect manual codings (Carletta, 1996). Although direct interpretation of such coefficients is difficult, they are an invaluable tool when comparing segmentation data that has been collected with differing labels and when estimating the replicability of a study. A variety of intercoder agreement coefficients exist, but this work focuses upon a selection of those discussed by Artstein and Poesio (2008), specifically: Scott’s π (Scott, 1955) Fleiss’ multi-π (π∗, Fleiss 1971)7, Cohen’s κ (Cohen, 1960), and multi-κ (κ∗, Davies and Fleiss 1982). Their general forms are shown in Equation 3, where Aa represents actual agreement, and Ae expected (i.e., chance) agreement between coders. κ, π, κ∗, and π∗= Aa −Ae 1 −Ae (3) When calculating agreement between manual segmenters, boundaries are considered labels and their positions the decisions. Unfortunately, because of the frequency of near misses that occur in segmentation, using such labels and decisions causes inter-coder agreement coefficients to drastically underestimate actual agreement— much like how automatic segmenter performance is underestimated when segmentation is treated as a binary classification problem. Hearst (1997, pp. 53–54) attempted to adapt π∗to award partial credit for near misses by using the percentage agreement metric of Gale et al. (1992, p. 254) to compute actual agreement—which conflates multiple manual segmentations together according to whether a majority of coders agree upon a boundary or not. Unfortunately, such a method of computing agreement grossly inflates results, and “the statistic itself guarantees at least 50% agreement by only pairing off coders against the majority opinion” (Isard and Carletta, 1995, p. 63). Fournier and Inkpen (2012, pp. 154–156) proposed using pairwise mean S for actual agreement to allow inter-coder agreement coefficients to award partial credit for near misses. Unfortunately, because S produces cosmetically high values, it also causes inter-coder agreement coefficients to drastically overestimates actual agreement. This work demonstrates this deficiency and proposes and evaluates a solution. 3 A New Proposal for Edit-Based Text Segmentation Evaluation In this section, a new boundary edit distance based segmentation metric and confusion matrix is proposed to solve the deficiencies of S for both segmentation comparison and inter-coder agreement. 7Sometimes referred to as K (Siegel and Castellan, 1988). 1704 3.1 Boundary Edit Distance In this section, Boundary Edit Distance (BED; as proposed in Fournier and Inkpen 2012, pp. 154– 156) is introduced in more detail, and a few terminological and conceptual changes are made. Boundary Edit Distance uses three main edit operations to model segmentation differences: • Additions/deletions (AD; referred to originally as substitutions) for full misses; • Substitutions (S; not shown for brevity) for confusing one boundary type with another; • n-wise transpositions (T) for near misses. These edit operations are symmetric and operate upon the set of boundaries that occur at each potential boundary position in a pair of segmentations. 
An example of how these edit operations are applied8 is shown in Figure 1, where a near miss (T), a matching pair of boundaries (M), and two full misses (ADs) are shown with the maximum distance that a transposition can span (nt) set to 2 potential boundaries (i.e., only adjacent positions can be transposed). s1 2 4 4 4 s2 3 3 6 2 T M AD AD Figure 1: Boundary edit operations In Figure 1, the location of the errors is clearly shown. Importantly, however, pairs of boundaries between the segmentations can be seen that represent the decisions made, and the correctness of these decisions. Imagine that s1 is a manual segmentation, and s2 is an automatic segmenter’s hypothesis. The transposition is a partially correct decision, or boundary pair. The match is a correct boundary pair. The additions/deletions, however, could be one of two erroneous decisions: to not place an expected boundary (FN), or to place a superfluous boundary (FP).9 This work proposes assigning a correctness score for each boundary pair/decision (shown in Table 1) and then using the mean of this score as a normalization of boundary edit distance. This interpretation intuitively relates boundary edit distance to coder judgements, making it ideal for 8A complete explanation of Boundary Edit Distance is detailed in Fournier (2013, Section 4.1.2). 9Also note that the ADs are close together, and if nt > 2, then they would be considered a T, and not two ADs—this is one way to award partial credit for near misses. calculating actual agreement in inter-coder agreement coefficients and comparing segmentations. Pair Correctness Match 1 Addition/deletion 0 Transposition 1 −wt span(Te, nt) Substitution 1 −ws ord(Se, Tb) Table 1: Correctness of boundary pair 3.2 Boundary Similarity The new boundary edit distance normalization proposed herein is referred to as boundary similarity (B). Assuming that boundary edit distance produces sets of edit operations where Ae is the set of additions/deletions, Te the set of n-wise transpositions, Se the set of substitutions, and BM the set of matching boundary pairs, boundary similarity similarity can be defined as shown in Equation 4— one minus the incorrectness of each boundary pair over the total number of boundary pairs. B(s1, s2, nt) = 1−|Ae| + wt span(Te, nt) + ws ord(Se, Tb) |Ae| + |Te| + |Se| + |BM| (4) This form, one minus a penalty function, was chosen so that it was easier to compare against other penalty functions considered (not shown here for brevity). This normalization was also chosen because it is equivalent to mean boundary pair correctness and so that it ranges in value from 0 to 1. In the worst case, a segmentation comparison will result in no matches, no near misses, no substitutions, and X full misses, i.e., |Ae| = X and all other terms in Equation 4 are zero, meaning that: B = 1 − X + 0 + 0 X + 0 + 0 + 0 = 1 −X/X = 1 −1 = 0 In the best case, a segmentation comparison will result in X matches, no near misses, no substitutions, and no full misses, i.e., |BM| = X and all other terms in Equation 4 are zero, meaning that: B = 1 − 0 + 0 + 0 0 + 0 + 0 + X = 1 −0/X = 1 −0 = 1 For all other scenarios, varying numbers of matches, near misses, substitutions and full misses will result in values of B between 0 and 1. Equation 4 takes two segmentations (in any order), and the maximum transposition spanning distance (nt). This distance represents the greatest offset between boundary positions that could be considered a near miss and can be used to scale 1705 the severity of a near miss. 
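As a minimal numeric illustration of Equation 4 for a single boundary type (so no substitutions), the sketch below computes B from the counts of matches and full misses plus one per-near-miss penalty in [0, 1]; the penalty of 0.5 used in the example is purely illustrative, since the actual scaling of near misses is defined next.

```python
def boundary_similarity(n_matches, n_full_misses, near_miss_penalties):
    """B (Equation 4) for one boundary type: one minus the summed
    incorrectness of the boundary pairs over the number of boundary pairs.

    near_miss_penalties holds one scaled penalty in [0, 1] per transposition;
    how these penalties are derived from the offset and n_t is a separate
    choice (see the scaling functions discussed below).
    """
    n_pairs = n_matches + n_full_misses + len(near_miss_penalties)
    if n_pairs == 0:
        return 1.0  # two empty segmentations are trivially identical
    return 1.0 - (n_full_misses + sum(near_miss_penalties)) / n_pairs

# The comparison in Figure 1: one match, one near miss, two full misses.
# With an illustrative near-miss penalty of 0.5, B = 1 - 2.5/4 = 0.375.
print(boundary_similarity(1, 2, [0.5]))
```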
A variety of scaling functions could be used, and this work arbitrarily chooses a simple fraction to represent each transposition’s severity in terms of its distance from its paired boundary over nt plus a constant wt (0 by default), as shown in Equation 5. wt span(Te, nt) = |Te| X j=1  wt + abs(Te[j][1] −Te[j][2]) nt −1  (5) If multiple boundary types are used, then substitution edit operations would occur when one boundary type was confused with another. Assigning each boundary type tb ∈Tb a number on an ordinal scale, substitutions can be weighted by their distance on this scale over the maximum distance plus a constant ws (0 by default), as shown in Equation 6. ws ord(Se, Tb) = |Se| X j=1  ws + abs(Se[j][1] −Se[j][2]) max(Tb) −min(Tb)  (6) These scaling functions allow for edit penalties to range from 0 to ws/t plus some linear distance. 3.3 A Confusion Matrix for Segmentation The mean correctness of each pair (i.e., B) gives an indication of just how similar one segmentation is to another, but what if one wants to identify some specific attributes of the performance of an automatic segmenter? Is the segmenter confusing one boundary type with another, or is it very precise but has poor recall? The answers to these questions can be obtained by looking at text segmentation as a multi-class classification problem. This work proposes using a task’s set of boundary types (Tb) and the lack of a boundary (∅) to represent the set of segmentation classes in a boundary classification problem. Using these classes, a confusion matrix (defined in Equation 7) can be created which sums boundary pair correctness so that information-retrieval metrics can be calculated that award partial credit to near misses by scaling edits operations. CM(a, p) =                      |BM,a| + ws ord(Sa,p e , Tb) +wt span(Ta,p e , nt) if a = p ws ord(Sa,p e , Tb) +wt span(Ta,p e , nt) if a ̸= p |Ae,a| if p = ∅ |Ae,p| if a = ∅ (7) An example confusion matrix is shown in Figure 2 from which IR metrics such as precision, recall, and Fβ-measure can be computed (referred to as B-precision, B-recall, etc.). Actual Predicted B Non-B B CM(1, 1) CM(∅, 1) Non-B CM(1, ∅) CM(∅, ∅) Figure 2: Example confusion matrix (Tb = {1}) 3.4 B-Based Inter-coder Agreement Fournier and Inkpen (2012, p. 156–157) adapted four inter-coder agreement formulations provided by Artstein and Poesio (2008) to use S to award partial credit for near misses, but because S produces cosmetically high agreement values they grossly overestimate agreement. To solve this issue, this work instead proposes using microaverage B (i.e., mean boundary pair correctness over all documents and codings compared) to solve this issue (demonstrated in §5) because it does not over-estimate actual agreement (demonstrated in §4 and 5). 4 Discussion of Segmentation Metrics Before analysing how each metric compares to each other upon a large data set, it would be useful to investigate how they act on a smaller scale. To that end, this section discusses how each metric interprets a set of hypothetical segmentations of an excerpt of a poem by Coleridge (1816, pp. 55–58) titled Kubla Khan (shown in Figure 3)—chosen arbitrarily for its brevity (and beauty). These segmentations are topical and at the line-level. 1. In Xanadu did Kubla Khan 2. A stately pleasure-dome decree: 3. Where Alph, the sacred river, ran 4. Through caverns measureless to man 5. Down to a sunless sea. 6. So twice five miles of fertile ground 7. With walls and towers were girdled round: 8. 
And here were gardens bright with sinuous rills, 9. Where blossomed many an incense-bearing tree; 10. And here were forests ancient as the hills, 11. Enfolding sunny spots of greenery. Figure 3: Excerpt from the poem Kubla Khan (Coleridge, 1816, pp. 55–58) with line numbers Topical segmentations of this poem are difficult to produce because there is still some structural form (i.e., punctuation) which may dictate where a boundary lies, but the imagery, places, times, and subjects of the poem appear to twist and wind like a vision in a dream. Thus, placing a topical boundary in this text is a highly subjective task. One hypothetical topical segmentation of the excerpt is shown in Figure 4. In this section, a variety of 1706 contrived automatic segmentations are compared to this manual segmentation to illustrate how each metric reacts to different mistakes. Lines Description 1–2 Kubla Khan and his decree 3–5 Waterways 6–11 Fertile ground and greenery Figure 4: A hypothetical manual segmentation Assuming that Figure 4 represents an acceptable manual segmentation (m), how would each metric react to an automatic segmentation (a) that combines the segments 1–2 and 3–5 together? This would represent a full miss, or a false negative, as shown in Figure 5. S interprets these segmentations as being quite similar, yet, the automatic segmentation is missing a boundary. B and 1−WD,10 in this case, better reflect this error. m a S B 1−WD 0.9 0.5 0.77¯7 k = 2 Figure 5: False negative How would each metric react to an automatic segmentation that is very close to placing the boundaries correctly, but makes the slight mistake of thinking that the segment on waterways (3–5) ends a bit too early? This would represent a near miss, as shown in Figure 6. S and 1−WD incorrectly interpret this error as being equivalent to the previous false negative—a troubling result. Segmentation comparison metrics should be able to discern between the full and a near miss shown in these two figures, and an automatic segmenter that nearly misses a boundary should be awarded a better score than one which fully misses a boundary—B recognizes this and awards the near miss a higher score. m a S B 1−WD 0.9 0.75 0.77¯7 k = 2 Figure 6: Near miss How would each metric react to an automatic segmentation that adds an additional boundary between line 8 and 9? This would not be ideal because such a boundary falls in the middle of a cohesive description of a garden, representing 10WD is reported as 1−WD because WD is normally a penalty metric where a value of 0 is ideal, unlike S and B. Additionally, k = 2 for all examples in this section because WD computes k from the manual segmentation m, which does not change in these examples. a full miss, or false positive, as in Figure 7. S and 1−WD incorrectly interpret this error as being equivalent to the previous two errors—an even more troubling result. In this case, there are two matching boundaries and a pair that do not match, which is arguably preferable to the full miss and one match in Figure 5, but not to the match and near miss in Figure 6. B recognizes this, and awards a higher score to this automatic segmenter than that in Figure 5, but below Figure 6. m a S B 1−WD 0.9 0.66¯6 0.77¯7 k = 2 Figure 7: False positive How would each metric react to an automatic segmentation that compensates for its lack of precision by spuriously adding boundaries in clusters around where it thinks that segments should begin or end? This is shown in Figure 8. 
This kind of behaviour is finally penalized differently by S and 1−WD (unlike the other errors shown in this section), but it only barely results in a dip in their values. B also penalizes this behaviour, but does so much more harshly—in B’s interpretation, this is as egregious as committing a false negative (e.g., Figure 5)—an arguably correct interpretation, if the evaluation desires to maximize similarity with a manual segmentation. m a S B 1−WD 0.8 0.5 0.66¯6 k = 2 Figure 8: Cluster of false positives These short demonstrations of how S, B, and 1−WD interpret error should lead one to conclude that: i) WD can penalize near misses to the same degree as full misses—overly harshly; ii) Both S and WD are not very discriminating when small segments are analysed; and iii) B is the only one of the three metrics that is able to often discriminate between these situations. B, if used to rank these automatic segmenters, would rank them from best to worst performing as: the near miss, false positive, and then a tie between the false negative and cluster of false positives—a reasonable ranking in the context of an evaluation seeking similarity with a manual segmentation. 5 Segmentation Agreement Having a bit more confidence in B compared to S and WD on a small scale from the previous sec1707 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 P(miss) while P(near) = 0.0921 0.75 0.80 0.85 0.90 0.95 1.00 π−value using S 2 3 4 5 6 7 8 9 10 (a) S-based π∗showing increasing full misses with constant near misses 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 P(miss) while P(near) = 0.0921 0.2 0.4 0.6 0.8 1.0 π−value using Bb 2 3 4 5 6 7 8 9 10 (b) B-based π∗showing increasing full misses with constant near misses 2 3 4 5 6 7 8 9 10 Coders (quantity) 0.0 0.2 0.4 0.6 0.8 1.0 π−value using Bb S B (c) S and B based π∗with fully random segmentations Figure 9: Artificial data sets illustrating how π adapted to use either S or B reacts to increasing full misses and random segmentations and varying numbers of coders tion, it makes sense to analyse some larger data sets. Two such data sets are The Stargazer data set collected by Hearst (1997) and The Moonstone data set collected by Kazantseva and Szpakowicz (2012). Both are linear topical segmentations at the paragraph level with only one boundary type, but that is where their similarities end. The Stargazer text is a science magazine article titled “Stargazers look for life” (Baker, 1990) segmented by 7 coders and was one of twelve articles chosen for its length (between 1,800 and 2,500 words) and for having little structural demarcation. “The Moonstone” is a 19th century romance novel by Collins (1868) segmented by 4–6 coders per chapter; of its 23 chapters, 2 were coded in a pilot study and another 20 were coded individually by 27 undergraduate English students in 5 groups. For the Stargazer data set, using S-based π∗, an inter-coder agreement coefficient of 0.7562 is obtained—a reasonable level by content analysis standards. Unfortunately, this value is highly inflated, and B-based π∗gives a much more conservative coefficient at 0.4405. For the Moonstone data set, the agreement coefficients for each group of 4–6 coders using S-based π∗is again overinflated at 0.91, 0.92, 0.90, 0.94, 0.83. B-based π∗instead reports that the coefficients should be 0.20, 0.18, 0.40, 0.38, 0.23. Which of these coefficients should be trusted? Is agreement in these data sets high or low? To help answer that, this work looks at how the coders in the data sets behaved. 
If the segmenters in the Moonstone data set truly agreed with each other, then they should have all behaved similarly. One measure of coder behaviour is the frequency that they placed boundaries (normalized by their opportunity to place boundaries, i.e. the sum of the potential boundaries in the chapters that each segmented). This normalized frequency is shown per 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 Coder 0.00 0.05 0.10 0.15 0.20 0.25 Boundaries per potential boundary Figure 11: Normalized boundaries placed by each coder in the Moonstone data set (with mean±SD) coder in Figure 11 for The Moonstone data set, along with bars indicating the mean and one standard deviation above and below. As can be seen, the coders fluctuated wildly in the frequency with which they placed boundaries—some (e.g., coder 7) to degrees exceeding 2 standard deviations. The Moonstone data set as a whole does not exhibit coders who behaved similarly, supporting the assertion by B-based π∗that these coders do not agree well (though pockets of agreement exist). How can it be demonstrated that S-based agreement over-estimates agreement, and B-based agreement does not? One way to demonstrate this is through simulation. By estimating parameters from the large Moonstone data set such as the distribution of internal segment sizes produced by all coders, a random segmentation of the novel with similar characteristics can be created. From this single random segmentation, other segmentation can be created with a probability of either placing an offset boundary (i.e., a near miss) or placing an extra/omitting a boundary (i.e., a full miss)— a pseudo-coding. Manipulating these probabilities and keeping the probability of a near miss at a constant natural level should produce a slowly declin1708 Random Human BayesSeg APS MinCut Automatic segmenter 0.80 0.82 0.84 0.86 0.88 0.90 0.92 0.94 S −value n = 90 n = 90 n = 90 n = 90 n = 90 (a) S Random Human BayesSeg APS MinCut Artificial Segmenter 0.2 0.3 0.4 0.5 0.6 Bb mean and 95% CIs n = 1057 n = 841 n = 964 n = 738 n = 871 (b) B Random Human BayesSeg APS MinCut Automatic segmenter 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 1−WD−value n = 90 n = 90 n = 90 n = 90 n = 90 (c) 1−WD Figure 10: Mean performance of 5 segmenters using varying metrics with 95% confidence intervals ing amount of agreement which is unaffected by the number of pseudo-coders. This is not apparent, however, for S-based π∗in Figure 9a; as the probability of a full miss increases, agreement appears to rise and varies depending upon the number of pseudo-coders. B-based π∗however shows declining agreement and little to no variation depending upon the number of pseudo-coders, as shown in Figure 9b. If instead of creating pseudo-coders from a random segmentation a series of random segmentations with the same parameters were generated, a properly functioning inter-coder agreement coefficient should report some agreement (due to the similar parameters used to create the segmentations) but it should be quite low. Figure 9c shows this, and that S-based π∗drastically over-estimates what should be very low agreement whereas Bbased π∗properly reports low agreement. From these demonstrations, it is evident that S-based inter-coder agreement coefficients drastically over-estimate agreement, as does S itself in pairwise mean form. B-based coefficients, however, properly discriminate between levels of agreement regardless of the number of coders and do not over-estimate. 
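A schematic version of the pseudo-coder simulation (not the exact procedure or parameters used above) can be written in a few lines: perturb a base boundary vector with independent full-miss and near-miss probabilities, compute mean pairwise agreement among the pseudo-coders with the metric under study, and plug it into the coefficient of Equation 3.

```python
import random

def pseudo_coder(base, p_miss, p_near):
    """Derive one pseudo-coder from a base segmentation (0/1 boundary vector).

    With probability p_miss a position is toggled (an extra or dropped
    boundary, i.e. a full miss); with probability p_near an existing boundary
    is shifted one position to the right (a near miss). Schematic only.
    """
    out = [1 - b if random.random() < p_miss else b for b in base]
    for i in range(len(out) - 1):
        if out[i] == 1 and out[i + 1] == 0 and random.random() < p_near:
            out[i], out[i + 1] = 0, 1
    return out

def multi_pi(actual_agreement, expected_agreement):
    """Equation 3: pi* = (Aa - Ae) / (1 - Ae)."""
    return (actual_agreement - expected_agreement) / (1.0 - expected_agreement)
```

Here Aa would be the micro-average B (or pairwise mean S, for comparison) over the pseudo-coders, and Ae the chance agreement estimated as in Artstein and Poesio (2008); that estimation is not reproduced in this sketch.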
6 Evaluation of Automatic Segmenters Having looked at how S, WD, and B perform at a small scale in §4 and on larger data set in §5, this section demonstrates the use of these metrics to evaluate some automatic segmenters. Three automatic segmenters were trained—or had their parameters estimated upon—The Moonstone data set, including MinCut; (Malioutov and Barzilay, 2006), BayesSeg; (Eisenstein and Barzilay, 2008), and APS (Kazantseva and Szpakowicz, 2011). To put this evaluation into context, an upper and lower bound were also created comprised of a random coder from the manual data (Human) and a random segmenter (Random), respectively. These automatic segmenters, and the upper and lower bounds, were created, trained, and run by another researcher (Anna Kazantseva) with their labels removed during the development of the metrics detailed herein (to improve the impartiality of these analyses). An ideal segmentation evaluation metric should, in theory, place the three automatic segmenters between the upper and lower bounds in terms of performance if the metrics, and the segmenters, function properly. The mean performance of the upper and lower bounds upon the test set of the Moonstone data set using S, B, and WD are shown in Figure 10a– 10c along with 95% confidence intervals. Despite the difference in the scale of their values, both S and WD performed almost identically, placing the three automatic segmenters between the upper and lower bounds as expected. For S, statistically significant differences11 (α = 0.05) were found between all segmenters except between APS–human and MinCut–BayesSeg, and WD could only find significant differences between the automatic segmenters and the upper and lower bounds. B, however, shows a marked deviation, and places MinCut and APS statistically significantly below the random baseline with only BayesSeg between the upper and lower bounds—to a significant degree. Why would pairwise mean B act in such an unexpected manner? The answer lies in a further analysis using the confusion matrix proposed earlier to calculate B-precision and B-recall (as shown in Table 2). From the values in Table 2, all three automatic segmenters appear to have Bprecision above the baseline and below the upper bound, but the B-recall of both APS and MinCut is below that of the random baseline (illustrated 11Using Kruskal-Wallis rank sum multiple comparison tests (Siegel and Castellan, 1988, pp. 213-214) for S and WD and the Wilcoxon-Nemenyi-McDonald-Thompson test (Hollander and Wolfe, 1999, p. 295) for B. 1709 B n B-P B-R B-F1 TP FP FN TN Random 0.2640 ± 0.0129 1057 0.3991 0.4673 0.4306 279.0 420 318 4236.0 Human 0.5285 ± 0.0164 841 0.6854 0.7439 0.7135 444.5 204 153 4451.5 BayesSeg 0.3745 ± 0.0146 964 0.5247 0.6224 0.5694 361.0 327 219 4346.0 APS 0.2873 ± 0.0163 738 0.6773 0.3403 0.4530 212.0 101 411 4529.0 MinCut 0.2468 ± 0.0141 871 0.4788 0.3496 0.4041 215.0 234 400 4404.0 Table 2: Mean performance of 5 segmenters using micro-average B, B-precision (B-P), B-recall (B-R), and B-Fβ-measure (B-F1) along with the associated confusion matrix values for 5 segmenters 0.0 0.2 0.4 0.6 0.8 1.0 B −recall 0.0 0.2 0.4 0.6 0.8 1.0 B −precision Random Human BayesSeg APS MinCut Figure 12: Mean B-precision versus B-recall of 5 automatic segmenters in Figure 12). 
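For reference, the B-precision, B-recall, and B-F1 values in Table 2 follow from the confusion-matrix counts in the usual way; the short sketch below reproduces the APS row to within rounding, before the discussion of WD's role in this result resumes.

```python
def b_ir_metrics(tp, fp, fn):
    """Precision, recall, and F1 over a BED-based confusion matrix.

    tp holds matches plus the scaled partial credit for near misses, while fp
    and fn hold the addition/deletion edits attributed to spurious and missed
    boundaries respectively.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# APS row of Table 2: (212.0, 101, 411) -> approximately (0.677, 0.340, 0.453)
print(b_ir_metrics(212.0, 101, 411))
```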
These automatic segmenters were developed and performance tuned using WD, thus it would be expected that they would perform as they did according to WD, but the evaluation using B highlights WD’s bias towards sparse segmentations (i.e., those with low B-recall)—a failing that S also appears to share. Mean B shows an unbiased ranking of these automatic segmenters in terms of the upper and lower bounds. B, then, should be preferred over S and WD for an unbiased segmentation evaluation that assumes that similarity to a human solution is the best measure of performance for a task. 7 Conclusions In this work, a new segmentation evaluation metric, referred to as boundary similarity (B) is proposed as an unbiased metric, along with a boundary-edit-distance-based (BED-based) confusion matrix to compute predictably biased IR metrics such as precision and recall. Additionally, a method of adapting inter-coder agreement coefficients to award partial credit for near misses is proposed that uses B as opposed to S for actual agreement so as to not over-estimate agreement. B overcomes the cosmetically high values of S and, the bias towards segmentations with few or tightly-clustered boundaries of WD–manifesting in this work as a bias towards precision over recall for both WD and S. When such precision is desirable, however, B-precision can be computed from a BED-based confusion matrix, along with other IR metrics. WD and Pk should not be preferred because their biases do not occur consistently in all scenarios, whereas BED-based IR metrics offer expected biases built upon a consistent, edit-based, interpretation of segmentation error. B also allows for an intuitive comparison of boundary pairs between segmentations, as opposed to the window counts of WD or the simplistic edit count normalization of S. When an unbiased segmentation evaluation metric is desired, this work recommends the usage of B and the use of an upper and lower bound to provide context. Otherwise, if the evaluation of a segmentation task requires some biased measure, the predictable bias of IR metrics computed from a BED-based confusion matrix is recommended. For all evaluations, however, a justification for the biased/unbiased metrics used should be given, and more than one metric should be reported so as to allow a reader to ascertain for themselves whether a particular automatic segmenter’s bias in some manner is cause for concern or not. 8 Future Work Future work includes adapting this work to analyse hierarchical segmentations and using it to attempt to explain the low inter-coder agreement coefficients reported in topical segmentation tasks. Acknowledgements I would like to thank Anna Kazantseva for her invaluable feedback and data. Additionally, I would like to thank my thesis committee members—Stan Szpakowicz, James Green, and Xiaodan Zhu—for their feedback along with my supervisor Diana Inkpen and colleague Martin Scaiano. 1710 References Artstein, Ron and Massimo Poesio. 2008. Intercoder agreement for computational linguistics. Computational Linguistics 34(4):555–596. Baker, David. 1990. Stargazers look for life. South Magazine 117:76–77. Beeferman, Doug and Adam Berger. 1999. Statistical models for text segmentation. Machine Learning 34:177–210. Carletta, Jean. 1996. Assessing Agreement on Classification Tasks: The Kappa Statistic. Computational Linguistics 22(2):249–254. Chang, Pi-Chuan, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. 
In Proceedings of the Third Workshop on Statistical Machine Translation. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 224–232. Cohen, Jacob. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement 20:37–46. Coleridge, Samuel Taylor. 1816. Christabel, Kubla Khan, and the Pains of Sleep. John Murray. Collins, Wilkie. 1868. The Moonstone. Tinsley Brothers. Davies, Mark and Joseph L. Fleiss. 1982. Measuring agreement for multinomial data. Biometrics 38:1047–1051. Eisenstein, Jacob. 2009. Hierarchical text segmentation from multi-scale lexical cohesion. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 353–361. Eisenstein, Jacob and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Morristown, NJ, USA, pages 334–343. Fleiss, Joseph L. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin 76:378–382. Fournier, Chris and Diana Inkpen. 2012. Segmentation Similarity and Agreement. In Proceedings of Human Language Technologies: The 2012 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 152– 161. Fournier, Christopher. 2013. Evaluating Text Segmentation. Master’s thesis, University of Ottawa. Franz, Martin, J. Scott McCarley, and Jian-Ming Xu. 2007. User-oriented text segmentation evaluation measure. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery, Stroudsburg, PA, USA, pages 701–702. Gale, William, Kenneth Ward Church, and David Yarowsky. 1992. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 249–256. Georgescul, Maria, Alexander Clark, and Susan Armstrong. 2006. An analysis of quantitative aspects in the evaluation of thematic segmentation algorithms. In Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 144–151. Haghighi, Aria and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL ’09, pages 362–370. Hearst, Marti A. 1993. TextTiling: A Quantitative Approach to Discourse. Technical report, University of California at Berkeley, Berkeley, CA, USA. Hearst, Marti A. 1997. TextTiling: Segmenting Text into Multi-paragraph Subtopic Passages. Computational Linguistics 23:33–64. Hollander, Myles and Douglas A. Wolfe. 1999. 1711 Nonparametric Statistical Methods. John Wiley & Sons, 2nd edition. Isard, Amy and Jean Carletta. 1995. Replicability of transaction and action coding in the map task corpus. In AAAI Spring Symposium: Empirical Methods in Discourse Interpretation and Generation. pages 60–66. 
Kazantseva, Anna and Stan Szpakowicz. 2011. Linear Text Segmentation Using Affinity Propagation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Edinburgh, Scotland, UK., pages 284– 293. Kazantseva, Anna and Stan Szpakowicz. 2012. Topical Segmentation: a Study of Human Performance. In Proceedings of Human Language Technologies: The 2012 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 211–220. Lamprier, Sylvain, Tassadit Amghar, Bernard Levrat, and Frederic Saubion. 2007. On evaluation methodologies for text segmentation algorithms. In Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence. IEEE Computer Society, Washington, DC, USA, volume 2, pages 19–26. Litman, Diane J. and Rebecca J. Passonneau. 1995. Combining multiple knowledge sources for discourse segmentation. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 108–115. Malioutov, Igor and Regina Barzilay. 2006. Minimum cut model for spoken lecture segmentation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 25–32. Niekrasz, John and Johanna D. Moore. 2010. Unbiased discourse segmentation evaluation. In Proceedings of the IEEE Spoken Language Technology Workshop, SLT 2010. IEEE 2010, pages 43–48. Oh, Hyo-Jung, Sung Hyon Myaeng, and MyungGil Jang. 2007. Semantic passage segmentation based on sentence topics for question answering. Information Sciences 177(18):3696–3717. Passonneau, Rebecca J. and Diane J. Litman. 1993. Intention-based segmentation: human reliability and correlation with linguistic cues. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 148–155. Pevzner, Lev and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics 28:19–36. Reynar, Jeffrey C. and Adwait Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In Proceedings of the 5th Conference on Applied Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 16–19. Scott, William A. 1955. Reliability of content analysis: The case of nominal scale coding. Public Opinion Quarterly 19:321–325. Siegel, Sidney and N. J. Castellan. 1988. Nonparametric Statistics for the Behavioral Sciences, McGraw-Hill, New York, USA, chapter 9.8. 2nd edition. Sirts, Kairit and Tanel Alum¨ae. 2012. A Hierarchical Dirichlet Process Model for Joint Part-ofSpeech and Morphology Induction. In Proceedings of Human Language Technologies: The 2012 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 407– 416. Stoyanov, Veselin and Claire Cardie. 2008. Topic identification for fine-grained opinion analysis. In Proceedings of the 22nd International Conference on Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 817–824. 1712
2013
167
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1713–1722, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Crowd Prefers the Middle Path: A New IAA Metric for Crowdsourcing Reveals Turker Biases in Query Segmentation Rohan Ramanath∗ R. V. College of Engineering Bangalore, India [email protected] Monojit Choudhury Microsoft Research Lab India Bangalore, India [email protected] Kalika Bali Microsoft Research Lab India Bangalore, India [email protected] Rishiraj Saha Roy† Indian Institute of Technology Kharagpur Kharagpur, India [email protected] Abstract Query segmentation, like text chunking, is the first step towards query understanding. In this study, we explore the effectiveness of crowdsourcing for this task. Through carefully designed control experiments and Inter Annotator Agreement metrics for analysis of experimental data, we show that crowdsourcing may not be a suitable approach for query segmentation because the crowd seems to have a very strong bias towards dividing the query into roughly equal (often only two) parts. Similarly, in the case of hierarchical or nested segmentation, turkers have a strong preference towards balanced binary trees. 1 Introduction Text chunking of Natural Language (NL) sentences is a well studied problem that is an essential preprocessing step for many NLP applications (Abney, 1991; Abney, 1995). In the context of Web search queries, query segmentation is similarly the first step towards analysis and understanding of queries (Hagen et al., 2011). The task in both the cases is to divide the sentence or the query into contiguous segments or chunks of words such that the words from a segment are related to each other more strongly than words from different segments (Bendersky et al., 2009). It is typically assumed that the segments are structurally and semantically coherent and, therefore, the information contained in them can be processed holistically. ∗The work was done during author’s internship at Microsoft Research Lab India. † This author was supported by Microsoft Corporation and Microsoft Research India under the Microsoft Research India PhD Fellowship Award. f Pipe representation Boundary var. 4 apply | first aid course | on line 1 0 0 1 0 3 apply first aid course | on line 0 0 0 1 0 2 apply first aid | course on line 0 0 1 0 0 1 apply | first aid | course | on line 1 0 1 1 0 Table 1: Example of flat segmentation by Turkers. f is the frequency of annotations; segment boundaries are represented by |. f Bracket representation Boundary var. 4 ((apply first) ((aid course) (on line))) 0 2 0 1 0 2 (((apply (first aid)) course) (on line)) 1 0 2 3 0 2 ((apply ((first aid) course)) (on line)) 2 0 1 3 0 1 (apply (((first aid) course) (on line))) 3 0 1 2 0 1 ((apply (first aid)) (course (on line))) 1 0 2 1 0 Table 2: Example of nested segmentation by Turkers. f is the frequency of annotations. A majority of work on query segmentation relies on manually segmented queries by human experts for training and evaluation of segmentation algorithms. These are typically small datasets and even with detailed annotation guidelines and/or close supervision, low Inter Annotator Agreement (IAA) remains an issue. For instance, Table 1 illustrates the variation in flat segmentation by 10 annotators. This confusion is mainly because the definition of a segment in a query is ambiguous and of an unspecified granularity. 
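As a note on notation before returning to the sources of this confusion: the boundary-variable column shown for the flat segmentations in Table 1 is simply the indicator vector of segment breaks, with one entry per gap between consecutive words. A small sketch of the mapping from the pipe form (illustrative only, not part of our experimental pipeline):

```python
def boundary_vector(flat_segmentation):
    """Map a pipe-delimited flat segmentation to its boundary vector.

    "apply | first aid course | on line" -> [1, 0, 0, 1, 0]
    (one entry per gap between consecutive words; 1 if a segment
    boundary falls in that gap, 0 otherwise).
    """
    bounds = []
    for token in flat_segmentation.split():
        if token == "|":
            bounds[-1] = 1          # a boundary follows the previous word
        else:
            bounds.append(0)        # open a new gap slot after this word
    return bounds[:-1]              # no gap after the last word

print(boundary_vector("apply | first aid course | on line"))  # [1, 0, 0, 1, 0]
```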
This is further compounded by the fact that other than easily recognizable and agreed upon segments such as Named Entities or Multi-Word Expressions, there is no established notion of linguistic grouping such as phrases and clauses in a query. Although there is little work on the use of crowdsourcing for query segmentation (Hagen et al., 2011; Hagen et al., 2012), the idea that the 1713 crowd could be a potential (and cheaper) source for reliable segmentation seems a reasonable assumption. The need for larger datasets makes this an attractive proposition. Also, a larger number of annotations could be appropriately distilled to obtain better quality segmentations. In this paper we explore crowdsourcing as an option for query segmentation through experiments designed using Amazon Mechanical Turk (AMT)1. We compare the results against gold datasets created by trained annotators. We address the issues pertaining to disagreements due to both ambiguity and granularity and attempt to objectively quantify their role in IAA. To this end, we also conduct similar annotation experiments for NL sentences and randomly generated queries. While queries are not as structured as NL sentences they are not simply a set of random words. Thus, it is necessary to compare query segmentation to the ¨uber-structure of NL sentences as well as the unter-structure of random n-grams. This has important implications for understanding any inherent biases annotators may have as a result of the apparent lack of structure of the queries. To quantify the effect of granularity on segmentation, we also ask annotators to provide hierarchical or nested segmentations for real and random queries, as well as sentences. Following Abney’s (1992) proposal for hierarchical chunking of NL, we ask the annotators to group exactly two words or segments at a time to recursively form bigger segments. The concept is illustrated in Fig. 1. Table 2 shows annotations from 10 Turkers. It is important to constrain the joining of exactly two segments or words at a time to avoid the issue of fuzziness in granularity. We shall refer to this style of annotation as Nested segmentation, whereas the non-hierarchical nonconstrained chunking will be referred to as Flat segmentation. Through statistical analysis of the experimental data we show that crowdsourcing may not be the best practice for query segmentation, not only because of ambiguity and granularity issues, but because there exist very strong biases amongst annotators to divide a query into two roughly equal parts that result in misleadingly high agreements. As a part of our analysis framework, we introduce a new IAA metric for comparison across flat and nested segmentations. This versatile metric can be 1https://www.mturk.com/mturk/welcome 3 2 1 apply 0 first aid course 0 on line Figure 1: Nested Segmentation: Illustration. readily adapted for measuring IAA for other linguistic annotation tasks, especially when done using crowdsourcing. The rest of the paper is organized as follows. Sec 2 provides a brief overview of related work. Sec 3 describes the experiment design and procedure. In Sec 4, we introduce a new metric for IAA, that could be uniformly applied across flat and nested segmentations. Results of the annotation experiments are reported in Sec 5. In Sec 6, we analyze the possible statistical and linguistic biases in annotation. Sec 7 concludes the paper by summarizing the work and discussing future research directions. 
All the annotated datasets used in this research are freely available for non-commercial research purposes2. 2 Related Work Query segmentation was introduced by Risvik et. al. (2003) as a possible means to improve Information Retrieval. Since then there has been a significant amount of research exploring various algorithms for this task and its use in IR (see Hagen et. al. (2011) for a survey). Most of the research and evaluation considers query segmentation as a process analogous to identification of phrases within a query which when put within double-quotes (implying exact matching of the quoted phrase in the document) leads to better IR performance. However, this is a very restricted view of the process and does not take into account the full potential of query segmentation. A more generic notion of segments leads to diverse and ambiguous definitions, making its evaluation a hard problem (see Saha Roy et. al. (2012) for a discussion on issues with evaluation). Most automatic segmentation techniques (Bergsma and Wang, 2007; Tan and Peng, 2008; Zhang et al., 2Related datasets and supplementary material can be accessed from http://bit.ly/161Gkk9 or can be obtained by directly emailing the authors. 1714 2009; Brenes et al., 2010; Hagen et al., 2011; Li et al., 2011) have so far been evaluated only against a small set of human-annotated queries (Bergsma and Wang, 2007). The reported low IAA for such datasets casts serious doubts on the reliability of annotation and the performance of the algorithms evaluated on them (Hagen et al., 2011; Saha Roy et al., 2012). To address the problem of data scarcity, Hagen et. al. (2011) have created larger annotated datasets through crowdsourcing3. However, in their approach the crowd is provided with a few (four) possible segmentations of a query to choose from (known through a personal communication with a authors). Thus, it presupposes an automatic process that can generate the correct segmentation of a query within top few options. It is far from obvious how to generate these initial segmentations in a reliable manner. This may also result in an over-optimistic IAA. An ideal segmentation should be based on the annotators’ own interpretation of the query. Nevertheless, if large scale data has to be procured, crowdsourcing seems to be the only efficient and effective model for this task, and has been proven to be so for other IR and linguistic annotations; see Carvalho et al. (2011) for examples of crowdsourcing for IR resources and (Snow et al., 2008; Callison-Burch, 2009) for language resources. In the context of NL text, segmentation has been traditionally referred to as chunking and is a well-studied problem. Abney (1991; 1992; 1995) defines a chunk as a sub-tree within a syntactic phrase structure tree corresponding to Noun, Prepositional, Adjectival, Adverbial and Verb Phrases. Similarly, Bharati et al (1995) defines it as Noun Group and Verb Group based only on local surface information. However, cognitive and annotation experiments for chunking of English (Abney, 1992) and other language text (Bali et al., 2009) have shown that native speakers agree on major clause and phrase boundaries, but may not do so on more fine-grained chunks. One important implication of this is that annotators are expected to agree more on the higher level boundaries for nested segmentation than the lower ones. We note that hierarchical query segmentation was proposed for the first time by Huang et al. 
(2010), where the authors recursively split a query (or its fragment) into exactly two parts and evaluate the 3http://www.webis.de/research/corpora final output against human annotations. 3 Experiments The annotation experiments have been designed to systematically study the various aspects of query segmentation. In order to verify the effectiveness and reliability of crowdsourcing, we designed an AMT experiment for flat segmentation of Web search queries. As a baseline, we would like to compare these annotations with those from human experts trained for the task. We shall refer to this baseline as the Gold annotation set. Since we believe that the issue of granularity could be the prime reason for previously reported low IAA for segmentation, we also designed AMT-based nested segmentation experiments for the same set of queries, and obtained the corresponding gold annotations. Finally, to estimate the role of ambiguity inherent in the structure of Web search queries on IAA, we conducted two more control experiments, both through crowdsourcing. First, flat and nested segmentation of well-formed English, i.e., NL sentences of similar length distribution; and second, flat and nested segmentation of randomly generated queries. Higher IAA for NL sentences would lead us to conclude that ambiguity and lack of structure in queries is the main reason for low agreements. On the other hand high or comparable IAA for random queries would mean that annotations have strong biases. Thus, we have the following four pairs of annotation experiments: flat and nested segmentation of queries from crowdsourcing, corresponding flat and nested gold annotations, flat and nested segmentation of English sentences from crowdsourcing, and flat and nested segmentations for randomly generated queries through crowdsourcing. 3.1 Dataset For our experiments, we need a set of Web search queries and well-formed English sentences. Furthermore, for generating the random queries, we will use search query logs to learn n-gram models. In particular, we use the following datasets: Q500, QG500: Saha Roy et al. (2012) released a dataset of 500 queries, 5 to 8 words long, for evaluation of various segmentation algorithms. This dataset has flat segmentations from three annotators obtained under controlled experimental settings, and can be considered as Gold annota1715 Figure 2: Length distribution of datasets. tions. Hence, we select this set for our experiments as well. We procured the corresponding nested segmentation for these queries from two human experts, who are regular search engine users, between 20 and 30 years old, and familiar with various linguistic annotation tasks. They annotated the data under supervision. They were trained and paid for the task. We shall refer to the set of flat and nested gold annotations as QG500, whereas Q500 will be reserved for AMT experiments. Q700: Since 500 queries may not be enough for reliable conclusion and since the queries may not have been chosen specifically for the purpose of annotation experiments, we expanded the set with another 700 queries sampled from a slice of the query logs of Bing Australia4 containing 16.7 million queries issued over a period of one month (May 2010). We picked, uniformly at random, queries that are 4 to 8 words long, have only English letters and numerals, and a high click entropy because “a query with a larger click entropy value is more likely to be an informational or ambiguous query” (Dou et al., 2008). 
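Click entropy is not defined formally in this paper; the formulation assumed in the sketch below, in the spirit of Dou et al. (2008), is the Shannon entropy of the click-through distribution over URLs observed for a query in the log. The code and data layout are illustrative only.

import math
from collections import Counter

def click_entropy(clicked_urls):
    # clicked_urls: the list of URLs clicked for one query in the log
    counts = Counter(clicked_urls)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(click_entropy(["u1", "u1", "u1", "u1"]))   # 0.0  (one dominant URL, navigational-like)
print(click_entropy(["u1", "u2", "u3", "u4"]))   # 2.0  (clicks spread out, informational/ambiguous)

Queries whose clicks concentrate on a single URL get an entropy near zero; those whose clicks spread over many URLs get a high value and were therefore preferred when sampling Q700.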
Q500 consists of tailish queries with frequency between 5 and 15 that have at least one multiword named entity; but unlike the case of Q700, click-entropy was not considered during sampling. As we shall see, this difference is clearly reflected in the results. S300: We randomly selected 300 English sentences from a collection of full texts of public domain books5 that were 5 to 15 words long, and checked them for well-formedness. This set will be referred to as S300. QRand: Instead of generating search queries by throwing in words randomly, we thought it will be more interesting to explore annotation of 4http://www.bing.com/?cc=au 5http://www.gutenberg.org Parameter Flat Details Nested Details Time needed: actual (allotted) 49 sec (10 min) 1 min 52 sec (15 min) Reward per HIT $0.02 $0.06 Instruction video duration 26 sec 1 min 40 sec Turker qualification Completion rate >100 tasks Turker approval rate Acceptance rate >60 % Turker location United States of America Table 3: Specifics of the HITs for AMT. queries generated using n-gram models for n = 1, 2, 3. We estimated the models from the Bing Australia log of 16.7 million queries. We generated 250 queries each of desired length distribution using the 1, 2 and 3-gram models. We shall refer to these as U250, B250, T250 (for Uni, Bi and Trigram) respectively, and the whole dataset as QRand. Fig. 2 shows the query and sentence length distribution for the various sets. 3.2 Crowdsourcing Experiments We used AMT to get our annotations through crowdsourcing. Pilot experiments were carried out to test the instruction set and examples presented. Based on the feedback, the precise instructions for the final experiments were designed. Two separate AMT Human Intelligence Tasks (HITs) were designed for flat and nested query segmentation. Also, the experiments for queries (Q500+Q700) were conducted separately from S300 and QRand. Thus, we had six HITs in all. The concept of flat and nested segmentation was introduced to the Turkers with the help of examples presented in two short videos6. When in doubt regarding the meaning of a query, the Turkers were advised to issue the query on a search engine of their choice and find out its possible interpretation(s). Note that we intentionally kept definitions of flat and nested segmentation fuzzy because (a) it would require very long instruction manuals to cover all possible cases and (b) Turkers do not tend to read verbose and complex instructions. Table 3 summarizes other specifics of HITs. Honey pots or trap questions whose answers are known a priori are often included in a HIT to identify turkers who are unable to solve the task appropriately leading to incorrect annotations. However, this trick cannot be employed in our case because there is no notion of an absolutely correct segmentation. We observe that even with unambiguous queries, even expert annotators may dis6Flat: http://youtu.be/eMeLjJIvIh0, Nested: http://youtu.be/xE3rwANbFvU 1716 agree on some of the segment boundaries. Hence, we decided to include annotations from all the turkers, except for those that were syntactically illformed (e.g., non-binary nested segmentation). 4 Inter Annotator Agreement Inter Annotator Agreement is the only way to judge the reliability of annotated data in absence of an end application. Therefore, before we can venture into analysis of the experimental data, we need to formalize the notion of IAA for flat and nested queries. The task is non-trivial for two reasons. 
First, traditional IAA measures are defined for a fixed set of annotators. However, for crowdsourcing based annotations, different annotators might have annotated different parts of the dataset. For instance, we observed that a total of 128 turkers have provided the flat annotations for Q700, when we had only asked for 10 annotations per query. Thus, on average, a turker has annotated only 7.81% of the 700 queries. In fact, we found that 31 turkers had annotated less than 5 queries. Hence, measures such as Cohen’s κ (1960) cannot be directly applied in this context because for crowdsourced annotations, we cannot meaningfully compute annotator-specific distribution of the labels and biases. Second, most of the standard annotation metrics do not generalize for flat segmentation and trees. Artstein and Poesio (2008) provides a comprehensive survey of the IAA metrics and their usage in NLP. They note that all the metrics assume that a fixed set of labels are used for items. Therefore, it is far from obvious how to compare chunking or segmentation that covers the whole text or that might have overlapping units as in the case of nested segmentation. Furthermore, we would like to compare the reliability of flat and nested segmentation, and therefore, ideally we would like to have an IAA metric that can be meaningfully applied to both of these cases. After considering various measures, we decided to appropriately generalize one of the most versatile and effective IAA metrics proposed till date, the Kripendorff’s α (2004). To be consistent with prior work, we will stick to the notation used in Artstein and Poesio (2008) and redefine the α in the context of flat and nested segmentation. Note that though the notations introduced here will be from the perspective of queries, it is equally applicable to sentences and the generalization is straightforward. 4.1 Notations and Definitions Let Q be the set of all queries with cardinality q. A query q ∈Q can be represented as a sequence of |q| words: w1w2 . . . w|q|. We introduce |q−1| random variables, b1, b2, . . . b|q|−1, such that bi represents the boundary between the words wi and wi+1. A flat or nested segmentation of q, represented by qj, j varying from 1 to total number of annotations c, is a particular instantiation of these boundary variables as described below. Definition. A flat segmentation, qj can be uniquely defined by a binary assignment of the boundary variables bj,i, where bj,i = 1 iff wi and wi+1 belong to two different flat segments. Otherwise, bj,i = 0. Thus, q has 2|q|−1 possible flat segmentations. Definition. A nested segmentation qj can also be uniquely defined by assigning non-negative integers to the boundary variables such that bj,i = 0 iff words wi and wi+1 form an atomic segment (i.e., they are grouped together), else bj,i = 1 + max(lefti, righti), where lefti and righti are the heights of the largest subtrees ending at wi and beginning at wi+1 respectively. This numbering scheme for nested segmentation can be understood through Fig. 1. Every internal node of the binary tree corresponding to the nested segmentation is numbered according to its height. The lowest internal nodes, both of whose children are query words, are assigned a value of 0. Other internal nodes get a value of one greater than the height of its higher child. Since every internal node corresponds to a boundary, we assign the height of the node to the corresponding boundaries. 
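These encodings are mechanical to compute. The sketch below (our own helper code, reusing the nested-pair tree representation from the earlier sketch) derives the binary boundary variables of a flat segmentation and the height-valued boundary variables of a nested segmentation.

def flat_boundaries(segments):
    # segments: list of word lists, e.g. [['apply'], ['first', 'aid'], ['course'], ['on', 'line']]
    b = []
    for segment in segments[:-1]:
        b += [0] * (len(segment) - 1) + [1]   # a boundary after the last word of each segment
    b += [0] * (len(segments[-1]) - 1)        # no boundaries inside the final segment
    return b

def nested_boundaries(tree):
    # tree: nested pairs of words, e.g. (('apply', 'first'), (('aid', 'course'), ('on', 'line')))
    # returns (words, [b_1, ..., b_{|q|-1}])
    if isinstance(tree, str):
        return [tree], []
    left_words, left_b = nested_boundaries(tree[0])
    right_words, right_b = nested_boundaries(tree[1])
    # The node joining the two subtrees gets 0 if both children are query words,
    # otherwise one more than the height of its higher child.
    height = 0 if not left_b and not right_b else 1 + max(left_b + right_b)
    return left_words + right_words, left_b + [height] + right_b

print(flat_boundaries([['apply'], ['first', 'aid'], ['course'], ['on', 'line']]))
# -> [1, 0, 1, 1, 0], the last row of Table 1
print(nested_boundaries((('apply', 'first'), (('aid', 'course'), ('on', 'line'))))[1])
# -> [0, 2, 0, 1, 0], the first row of Table 2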
The number of unique nested segmentations of a query of length |q| is its corresponding Catalan number7. Boundary variables for flat and nested segmentation are illustrated with an example of each kind in Tables 1 and 2 (last column). 4.2 Krippendorff ’s α for Segmentation Krippendorff ’s α (Krippendorff, 2004) is an extremely versatile agreement coefficient, which is based on the assumption that the expected agreement is calculated by looking at the overall distribution of judgments without regard to which annotator produced them (Artstein and Poesio, 2008). 7http://goo.gl/vKQvK 1717 Hence, it is appropriate for crowdsourced annotation, where the judgments come from a large number of unrelated annotators. Moreover, it allows for different magnitudes of disagreement, which is a useful feature as we might want to differentially penalize disagreements at various levels of the tree for nested segmentation. α is defined as α = 1 −Do De = 1 −s2 within s2 total (1) where Do and De are, respectively, the observed and expected disagreements that are measured by s2 within – the variance within the annotation of an item and s2 total – variance across annotations of all items. We adapt the equations presented in pp.565-566 of Artstein and Poesio (2008) for measuring these quantities for queries: s2 within = 1 2qc(c −1) X q∈Q c X m=1 c X n=1 d(qm, qn) (2) s2 total = 1 2qc(qc −1) X q∈Q c X m=1 X q′∈Q c X n=1 d(qm, q′ n) (3) where, d(qm, q′ n) is a distance metric for the agreement between annotations qm and q′ n. We define two different distance metrics d1 and d2 that are applicable to flat and nested segmentation. We shall first define these metrics for comparing queries with equal length (i.e., |q| = |q′|): d1(qm, q′ n) = 1 |q| −1 |q|−1 X i=1 |bm,i −b′ n,i| (4) d2(qm, q′ n) = 1 |q| −1 |q|−1 X i=1 |b2 m,i −(b′ n,i)2| (5) While d1 penalizes all disagreements equally, d2 penalizes disagreements higher up the tree more. d2 might be a desirable metric for nested segmentation, because research on sentence chunking shows that annotators agree more on clause or major phrase boundaries, even though they may not always agree on intra-clausal or intra-phrasal boundaries (Bali et al., 2009). Note that for flat segmentation, d1 and d2 are identical, and hence we will denote them as d. We propose the following extension to these metrics for queries of unequal lengths. Without loss of generality, let us assume that |q| < |q′|. k is 1 or 2; r = |q′| −|q| + 1. dk(qm, q′ n) = 1 r(|q| −1) r−1 X a=0 |q|−1 X i=1 |bk m,i −(b′ n,i+a)k| (6) 4.3 IAA under Random Bias Assumption Krippendorff’s α uses the cross-item variance as an estimate of chance agreement, which is reliable in general. However, this might result in misleadingly low values of IAA, especially when the items in the set are indeed expected to have similar annotations. To resolve this, we also compute the chance agreement under a random bias model. The random model assumes that all the structural annotations of q are equiprobable. For flat segmentation, it boils down to the fact that all the 2|q|−1 annotations are equally likely, which is equivalent to the assumption that any boundary variable bi has 0.5 probability of being 0 and 0.5 for 1. Analytical computation of the expected probability distributions of d1(qm, qn) and d2(qm, qn) is harder for nested segmentation. 
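As a concrete illustration of the adapted metric, the sketch below computes d1 and d2 (Eqs. 4-6, including the sliding-window extension for unequal lengths) and α (Eqs. 1-3) from annotations given as the boundary-variable lists defined above. It is our own illustration, not the authors' code; the simplification of α to a ratio of mean pairwise distances assumes every item carries the same number of annotations.

from itertools import combinations

def d_k(bm, bn, k=1):
    # Distance between two annotations given as boundary-variable lists.
    # For unequal lengths, the shorter list is slid over the longer one (Eq. 6).
    if len(bm) > len(bn):
        bm, bn = bn, bm
    r = len(bn) - len(bm) + 1
    total = sum(abs(bm[i] ** k - bn[i + a] ** k)
                for a in range(r) for i in range(len(bm)))
    return total / (r * len(bm))

def alpha(annotations, k=1):
    # annotations: dict mapping each query to its list of boundary-variable annotations
    within = [d_k(bm, bn, k)
              for anns in annotations.values()
              for bm, bn in combinations(anns, 2)]
    everything = [b for anns in annotations.values() for b in anns]
    total = [d_k(bm, bn, k) for bm, bn in combinations(everything, 2)]
    # With the normalizations of Eqs. 2-3, alpha reduces to one minus the ratio of
    # the mean within-item distance to the mean distance over all pairs of annotations.
    return 1.0 - (sum(within) / len(within)) / (sum(total) / len(total))

flat = {"q1": [[1, 0, 0, 1, 0], [0, 0, 0, 1, 0], [0, 0, 1, 0, 0]],
        "q2": [[1, 0, 1, 1, 0], [1, 0, 0, 1, 0], [0, 0, 0, 1, 0]]}
print(alpha(flat, k=1))

Note that this computes α directly from the observed annotations; the random-bias model above additionally needs the expected distributions of d1 and d2, which, as just noted, are hard to derive analytically for nested segmentation.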
Therefore, we programmatically generate all possible trees for q, which is again dependent only on |q| and compute d1 and d2 between all pairs of trees, from which the expected distributions can be readily estimated. Let us denote this expected cumulative probability distribution for flat segmentation as Pd(x; |q|) = the probability that for a pair of randomly chosen flat segmentations of q, qm and qn, d(qm, qn) ≥x. Likewise, let Pd1(x; |q|) and Pd2(x; |q|) be the respective probabilities that for any two nested segmentations qm and qn of q, the following holds: d1(qm, qn) ≥x and d2(qm, qn) ≥x. We define the IAA under random bias model as (k is 1, 2 or null): S = 1 qc2 X q∈Q c X m=1 c X n=1 Pdk(dk(qm, qn); |q|) (7) Thus, S is the expected probability of observing a similar or worse agreement by random chance, averaged over all pairs of annotations for all queries, and not a chance corrected IAA metric such as α. Thus, S = 1 implies that the observed agreement is almost always better than that by random chance and S = 0.5 and 0 respectively imply that the observed agreement is as good as and almost always worse than that by random chance. We 1718 Dataset Flat Nested d1 d1 d2 Q700 0.21(0.59) 0.21(0.89) 0.16(0.68) Q500 0.22(0.62) 0.15(0.70) 0.15(0.44) QG500 0.61(0.88) 0.66(0.88) 0.67(0.80) S300 0.27(0.74) 0.18(0.94) 0.14(0.75) U250 0.23(0.89) 0.42(0.90) 0.30(0.78) B250 0.22(0.86) 0.34(0.88) 0.22(0.71) T250 0.20(0.86) 0.44(0.89) 0.34(0.76) Table 4: Agreement Statistics: α(S). also note that a high value of S and low value of α indicate that though the annotators agree on the judgment of individual items, they also tend to agree on judgments of two different items, which in turn, could be due to strong annotator biases or due to lack of variability of the dataset. In the supplementary material, computations of α and S have been explained in further details through worked out examples. Tables for the expected distributions of d, d1 and d2 under the random annotation assumption are also available. 5 Results Table 4 reports the values of α and S for flat and nested segmentation on the various datasets. For nested segmentation, the values were computed for two different distance metrics d1 and d2. As expected, the highest value of α for both flat and nested segmentation is observed for gold annotations. An α > 0.6 indicates quite good IAA, and thus, reliable annotations. Higher α for nested segmentation QG500 than flat further validates our initial postulate that nested segmentation may reduce disagreement from granularity issues inherent in the definition of flat segmentation. Opposite trends are observed for Q700, Q500 and S300, where α for flat is the highest, followed by that for nested using d1, and then d2. Moreover, except for flat segmentation of sentences, α lies between 0.14 and 0.22, which is quite low. This clearly shows that segmentation, either flat or nested, cannot be reliably procured through crowdsourcing. Lower α for d2 than d1 further indicates that annotators disagree more for higher levels of the trees, contrary to what we had expected. However, nearly equal IAA for sentences and queries implies that low agreement may not be an outcome of inherent ambiguity in the structure of queries. Slightly higher α for flat segmentation and a much higher α for nested segmentation of QRand reinforce the fact that low IAA is not due to a lack of structure in queries. 
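For reference, the exhaustive enumeration used for the random-bias model of Sec 4.3 can be sketched in a few lines (reusing the nested_boundaries and d_k helpers from the earlier sketches; this brute force is practical only for the short queries considered here, since the number of trees grows with the Catalan numbers):

from itertools import product

def all_trees(words):
    # Every binary tree (nested segmentation) over a list of words.
    if len(words) == 1:
        return [words[0]]
    trees = []
    for split in range(1, len(words)):
        for left in all_trees(words[:split]):
            for right in all_trees(words[split:]):
                trees.append((left, right))
    return trees

def random_model_distances(n, k=1):
    # Pairwise d_k values between all nested segmentations of an n-word query,
    # assuming every tree is equally likely; P(d_k >= x) in Eq. 7 is read off this list.
    bvars = [nested_boundaries(t)[1] for t in all_trees(["w%d" % i for i in range(n)])]
    return sorted(d_k(bm, bn, k) for bm, bn in product(bvars, repeat=2))

dists = random_model_distances(5)        # 14 trees for a 5-word query, 196 ordered pairs
print(sum(d >= 0.5 for d in dists) / len(dists))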
It is interesting to note that α for nested segmentation of S300 and all segmentations of QRand are low or medium despite the fact that S is very high in all these cases. Thus, it is clear that annotators have a strong bias towards certain structures across queries. In the next section, we will analyze some of these biases. We also computed the IAA between QG500 and Q500, and found α = 0.27. This is much lower than α for QG500, though slightly higher than that for Q500. We did not observe any significant variation in agreement with respect to the length of the queries. 6 Biases in Annotation The IAA statistics clearly show that there are certain strong biases in both flat and nested query segmentation, especially those obtained through crowdsourcing. To identify these biases, we went through the annotations and came up with possible hypotheses, which we tried to verify through statistical analysis of the data. Here, we report the most prominent biases that were thus discovered. Bias 1: During flat segmentation, annotators prefer dividing the query into two segments of roughly equal length. As discussed earlier, one of the major problems of flat segmentation is the fuzziness in granularity. In our experiments, we intentionally left the decision of whether to go for fine or coarse-grained segmentation to the annotator. However, it is surprising to observe that annotators typically divide the query into two segments (see Fig. 3, plots A1 and A2), and at times three, but hardly ever more than three. This bias is observed across queries, sentences and random queries, where the percentage of annotations with 2 or 3 segments are greater than 83%, 91% and 96% respectively. This bias is most strongly visible for QRand because the lack of syntactic or semantic cohesion between the words provides no clue for segmentation. Furthermore, we observe that typically segments tend to be of equal length. For this, we computed standard deviations (sd) of segment lengths for all annotations having 2 or 3 segments; the distribution of sd is shown in Fig. 3, plots B1 and B2. We observe that for all datasets, sd lies mainly between 0.5 and 1 (for perspective, consider a query 1719 Figure 3: Analysis of annotation biases: A1, A2 – number of segments per flat segmentation vs. length; B1, B2 – standard deviation of segment length for flat segmentation; C1, C2 – distribution of the tree heights in nested segmentation. Length Expected Q500 QG500 Q700 S300 QRand 5 2.57 2.00 2.02 2.08 2.02 2.01 6 3.24 2.26 2.23 2.23 2.24 2.02 7 3.88 2.70 2.71 2.67 2.55 2.62 8 4.47 2.89 2.68 2.72 2.72 2.35 Table 5: Average height for nested segmentation. with 7 words; with two segments of length 3 and 4 the sd is 0.5, and for 2 and 5, the sd is 1.5), implying that segments are roughly of equal length. It is likely that due to this bias, the S or observed agreement is moderately high for queries and very high for sentences, but then it also leads to high agreement across different queries and sentences (i.e., high s2 total) especially when they are of equal length, which in turn brings down the value of α – the true agreement after bias correction. Bias 2: During nested segmentation, annotators prefer balanced binary trees. Quite analogous to bias 1, for nested segmentation we observe that annotators tend to prefer more balanced binary trees. Fig. 
3 plots C1 and C2 show the distribution of the tree heights for various cases and Table 5 reports the corresponding average height of the trees for queries and sentences of various lengths and the the expected value of the height if all trees were equally likely. The observed heights are much lower than the expected values clearly implying the preference of the annotators for more balanced trees. Thus, the crowd seems to choose the middle path, avoiding extremes and hence may not be a reliable source of annotation for query segmentation. It can be argued that similar biases are also observed for gold annotations, and therefore, probably it is the inherent structure of the queries and sentences that lead to such biased distribution of segmentation patterns. However, note that α for QG500 is much higher than all other cases, which shows that the true agreement between gold annotators is immune to such biases or skewed distributions in the datasets. Furthermore, high values of α for QRand despite the very strong biases in annotation shows that there perhaps is very little choice that the annotators have while segmenting randomly generated queries. On the other hand, the textual coherence of the real queries and sentences provide many different choices for segmentation and the Turker typically gets carried away by these biases, leading to low α. Bias 3: Phrase structure drives segmentation only when reconcilable with Bias 1. Whenever the sentence or query has a verb phrase (VP) spanning roughly half of it, annotators seem to chunk before the VP as one would expect, quite as often as just after the verb, which is quite unexpected. For instance, the sentence A gentle sarcasm ruffled her anger. gathers as many as eight flat annotations with a boundary between sarcasm and ruffled, and four with a boundary between ruffled and her. However, if the VP is very short consisting of a single 1720 Position Q500 QG500 Q700 S300 QRand Both 2.24 0.37 2.78 2.08 0.63 None 50.34 56.85 35.74 35.84 39.81 Right 23.86 21.50 19.02 12.52 15.23 Left 18.08 15.97 40.59 45.96 21.21 Table 6: Percentages of positions of segment boundaries with respect to prepositions. Prepositions occurring in the beginning or end of a query/sentence have been excluded from the analysis; hence, numbers in a column do not total 100. verb, as in A fleeting and furtive air of triumph erupted., annotators seem to attempt for a balanced annotation due to Bias 1. As a clear middle boundary is not present in such sentences, the annotations show a lot more variation and disagreement. For instance, only 1 out of 10 annotations had a boundary before erupted in the above example. In fact, at least one annotation had a boundary after each word in the sentence, with no clear majority. Bias 4: Prepositions influence segment boundaries differently for queries and sentences. We automatically labeled all the prepositions in the flat annotations and classified them according to the criterion of whether a boundary was placed immediately before or after it, or on both sides or neither side. The statistics, reported in Table 6, show that for NL sentences a majority of the boundaries are present before the preposition, marking the beginning of a prepositional phrase. However, for queries, a much richer pattern emerges depending on the specific preposition. For instance, to, of and for are often chunked with the previous word (e.g., how to | choose a bike size, birthday party ideas for | one year old). 
We believe that this difference is because in sentences due to the presence of a verb, the PP has a welldefined head, lack of which leads to preposition in queries getting chunked with words that form more commonly seen patterns (e.g., flights to and tickets for). Bias 3 and 4 present the complex interpretation of the structure of queries by the annotators which could be due to some emerging cognitive model of queries among the search engine users. This is a fascinating and unexplored aspect of query structures that demands deeper investigation through cognitive and psycholinguistic experiments. 7 Conclusion We have studied various aspects of query segmentation through crowdsourcing by designing and conducting suitable experiments. Analysis of experimental data leads us to conclude the following: (a) crowdsoucing may not be a very effective way to collect judgments for query segmentation; (b) addressing fuzziness of granularity for flat segmentation by introducing strict binary nested segments does not lead to better agreement in crowdsourced annotations, though it definitely improves the IAA for gold standard segmentations, implying that low IAA in flat segmentation among experts is primarily an effect of unspecified granularity of segments; (c) low IAA is not due to the inherent structural ambiguity in queries as this holds true for sentences as well; (d) there are strong biases in crowdsourced annotations, mostly because turkers prefer more balanced segment structures; and (e) while annotators are by and large guided by linguistic principles, application of these principles differ between query and NL sentences and also closely interact with other biases. One of the important contributions of this work is the formulation of a new IAA metric for comparing across flat and nested segmentations, especially for crowdsourcing based annotations. Since trees are commonly used across various linguistic annotations, this metric can have wide applicability. The metric, moreover, can be easily adapted to other annotation schemes as well by defining an appropriate distance metric between annotations. Since large scale data for query segmentation is very useful, it would be interesting to see if the problem can be rephrased to the Turkers in a way so as to obtain more reliable judgments. Yet a deeper question is regarding the theoretical status of query structure, which though in an emergent state is definitely an operating model for the annotators. Our future work in this area would specifically target understanding and formalization of the theoretical model underpinning a query. Acknowledgments We thank Ed Cutrell and Andrew Cross, Microsoft Research Lab India, for their help in setting up the AMT experiments. We would also like to thank Anusha Suresh, IIT Kharagpur, India, for helping us with data preparation. 1721 References Steven P. Abney. 1991. Parsing By Chunks. Kluwer Academic Publishers. Steven P. Abney. 1992. Prosodic Structure, Performance Structure And Phrase Structure. In Proceedings 5th DARPA Workshop on Speech and Natural Language, pages 425–428. Morgan Kaufmann. Steven P. Abney. 1995. Chunks and dependencies: Bringing processing evidence to bear on syntax. Computational Linguistics and the Foundations of Linguistic Theory, pages 145–164. Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Kalika Bali, Monojit Choudhury, Diptesh Chatterjee, Sankalan Prasad, and Arpit Maheswari. 2009. 
Correlates between Performance, Prosodic and Phrase Structures in Bangla and Hindi: Insights from a Psycholinguistic Experiment. In Proceedings of International Conference on Natural Language Processing, pages 101 – 110. Michael Bendersky, W. B. Croft, and David A. Smith. 2009. Two-stage query segmentation for information retrieval. In Proceedings of the 32nd international ACM Special Interest Group on Information Retrieval (SIGIR) Conference on Research and Development in Information Retrieval, pages 810–811. ACM. Shane Bergsma and Qin Iris Wang. 2007. Learning Noun Phrase Query Segmentation. In Proceedings of Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 819–826. Akshar Bharati, Vineet Chaitanya, Rajeev Sangal, and KV Ramakrishnamacharyulu. 1995. Natural language processing: a Paninian perspective. PrenticeHall of India New Delhi. David J. Brenes, Daniel Gayo-Avello, and Rodrigo Garcia. 2010. On the fly query segmentation using snippets. In CERI ’10, pages 259–266. Chris Callison-Burch. 2009. Fast, cheap, and creative: evaluating translation quality using amazon’s mechanical turk. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP ’09, pages 286–295. Association for Computational Linguistics. Vitor R Carvalho, Matthew Lease, and Emine Yilmaz. 2011. Crowdsourcing for search evaluation. ACM Sigir forum, 44(2):17–22. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37–46. Zhicheng Dou, Ruihua Song, Xiaojie Yuan, and JiRong Wen. 2008. Are Click-through Data Adequate for Learning Web Search Rankings? In Proceedings of the 17th ACM Conference on Information and Knowledge Management, pages 73–82. ACM. Matthias Hagen, Martin Potthast, Benno Stein, and Christof Br¨autigam. 2011. Query Segmentation Revisited. In Proceedings of the 20th International Conference on World Wide Web, pages 97– 106. ACM. Matthias Hagen, Martin Potthast, Anna Beyer, and Benno Stein. 2012. Towards Optimum Query Segmentation: In Doubt Without. In Proceedings of the Conference on Information and Knowledge Management, pages 1015–1024. Jian Huang, Jianfeng Gao, Jiangbo Miao, Xiaolong Li, Kuansan Wang, Fritz Behr, and C. Lee Giles. 2010. Exploring web scale language models for search query processing. In Proceedings of the 19th international conference on World wide web, WWW ’10, pages 451–460, New York, NY, USA. ACM. Klaus Krippendorff. 2004. Content Analysis: An Introduction to its Methodology. Sage,Thousand Oaks, CA. Yanen Li, Bo-Jun Paul Hsu, ChengXiang Zhai, and Kuansan Wang. 2011. Unsupervised query segmentation using clickthrough for information retrieval. In SIGIR ’11, pages 285–294. ACM. Knut Magne Risvik, Tomasz Mikolajewski, and Peter Boros. 2003. Query segmentation for web search. In WWW (Posters). Rishiraj Saha Roy, Niloy Ganguly, Monojit Choudhury, and Srivatsan Laxman. 2012. An IR-based Evaluation Framework for Web Search Query Segmentation. In Proceedings of the International ACM Special Interest Group on Information Retrieval (SIGIR) Conference on Research and Development in Information Retrieval, pages 881–890. ACM. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 254–263, Stroudsburg, PA, USA. 
Association for Computational Linguistics. Bin Tan and Fuchun Peng. 2008. Unsupervised Query Segmentation Using Generative Language Models and Wikipedia. In Proceedings of the 17th International Conference on World Wide Web (WWW), pages 347–356. ACM. Chao Zhang, Nan Sun, Xia Hu, Tingzhu Huang, and Tat-Seng Chua. 2009. Query segmentation based on eigenspace similarity. In Proceedings of the ACLIJCNLP 2009 Conference Short Papers, ACLShort ’09, pages 185–188, Stroudsburg, PA, USA. Association for Computational Linguistics. 1722
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1723–1732, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Deceptive Answer Prediction with User Preference Graph Fangtao Li§, Yang Gao†, Shuchang Zhou§‡, Xiance Si§, and Decheng Dai§ §Google Research, Mountain View ‡State Key Laboratory of Computer Architecture, Institute of Computing Technology, CAS {lifangtao,georgezhou,sxc,decheng}@google.com †Department of Computer Science and Technology, Tsinghua University gao [email protected] Abstract In Community question answering (QA) sites, malicious users may provide deceptive answers to promote their products or services. It is important to identify and filter out these deceptive answers. In this paper, we first solve this problem with the traditional supervised learning methods. Two kinds of features, including textual and contextual features, are investigated for this task. We further propose to exploit the user relationships to identify the deceptive answers, based on the hypothesis that similar users will have similar behaviors to post deceptive or authentic answers. To measure the user similarity, we propose a new user preference graph based on the answer preference expressed by users, such as “helpful” voting and “best answer” selection. The user preference graph is incorporated into traditional supervised learning framework with the graph regularization technique. The experiment results demonstrate that the user preference graph can indeed help improve the performance of deceptive answer prediction. 1 Introduction Currently, Community QA sites, such as Yahoo! Answers1 and WikiAnswers2, have become one of the most important information acquisition methods. In addition to the general-purpose web search engines, the Community QA sites have emerged as popular, and often effective, means of information seeking on the web. By posting questions for other participants to answer, users can obtain answers to their specific questions. The Community QA 1http://answers.yahoo.com 2http://wiki.answers.com sites are growing rapidly in popularity. Currently there are hundreds of millions of answers and millions of questions accumulated on the Community QA sites. These resources of past questions and answers are proving to be a valuable knowledge base. From the Community QA sites, users can directly get the answers to meet some specific information need, rather than browse the list of returned documents to find the answers. Hence, in recent years, knowledge mining in Community QA sites has become a popular topic in the field of artificial intelligence (Adamic et al., 2008; Wei et al., 2011). However, some answers may be deceptive. In the Community QA sites, there are millions of users each day. As the answers can guide the user’s behavior, some malicious users are motivated to give deceptive answers to promote their products or services. For example, if someone asks for recommendations about restaurants in the Community QA site, the malicious user may post a deceptive answer to promote the target restaurant. Indeed, because of lucrative financial rewards, in several Community QA sites, some business owners provide incentives for users to post deceptive answers for product promotion. There are at least two major problems that the deceptive answers cause. On the user side, the deceptive answers are misleading to users. If the users rely on the deceptive answers, they will make the wrong decisions. 
Or even worse, the promoted link may lead to illegitimate products. On the Community QA side, the deceptive answers will hurt the health of the Community QA sites. A Community QA site without control of deceptive answers could only benefit spammers but could not help askers at all. If the asker was cheated by the provided answers, he will not trust and visit this site again. Therefore, it is a fundamental task to predict and filter out the deceptive answers. In this paper, we propose to predict deceptive 1723 answer, which is defined as the answer, whose purpose is not only to answer the question, but also to promote the authors’ self-interest. In the first step, we consider the deceptive answer prediction as a general binary-classification task. We extract two types of features: one is textual features from answer content, including unigram/bigram, URL, phone number, email, and answer length; the other is contextual features from the answer context, including the relevance between answer and the corresponding question, the author of the answer, answer evaluation from other users and duplication with other answers. We further investigate the user relationship for deceptive answer prediction. We assume that similar users tend to have similar behaviors, i.e. posting deceptive answers or posting authentic answers. To measure the user relationship, we propose a new user preference graph, which is constructed based on the answer evaluation expressed by users, such as “helpful” voting and “best answer” selection. The user preference graph is incorporated into traditional supervised learning framework with graph regularization, which can make answers, from users with same preference, tend to have the same category (deceptive or authentic). The experiment results demonstrate that the user preference graph can further help improve the performance for deceptive answer prediction. 2 Related Work In the past few years, it has become a popular task to mine knowledge from the Community QA sites. Various studies, including retrieving the accumulated question-answer pairs to find the related answer for a new question, finding the expert in a specific domain, summarizing single or multiple answers to provide a concise result, are conducted in the Community QA sites (Jeon et al., 2005; Adamic et al., 2008; Liu et al., 2008; Song et al., 2008; Si et al., 2010a; Figueroa and Atkinson, 2011). However, an important issue which has been neglected so far is the detection of deceptive answers. If the acquired question-answer corpus contains many deceptive answers, it would be meaningless to perform further knowledge mining tasks. Therefore, as the first step, we need to predict and filter out the deceptive answers. Among previous work, answer quality prediction (Song et al., 2010; Harper et al., 2008; Shah and Pomerantz, 2010; Ishikawa et al., 2010) is most related to the deceptive answer prediction task. But these are still significant differences between two tasks. Answer quality prediction measures the overall quality of the answers, which refers to the accuracy, readability, completeness of the answer. While the deceptive answer prediction aims to predict if the main purpose of the provided answer is only to answer the specific question, or includes the user’s self-interest to promote something. Some of the previous work (Song et al., 2010; Ishikawa et al., 2010; Bian et al., 2009) views the “best answer” as high quality answers, which are selected by the askers in the Community QA sites. 
However, the deceptive answer may be selected as high-quality answer by the spammer, or because the general users are mislead. Meanwhile, some answers from non-native speakers may have linguistic errors, which are low-quality answers, but are still authentic answers. Our experiments also show that answer quality prediction is much different from deceptive answer prediction. Previous QA studies also analyze the user graph to investigate the user relationship (Jurczyk and Agichtein, 2007; Liu et al., 2011). They mainly construct the user graph with asker-answerer relationship to estimate the expertise score in Community QA sites. They assume the answerer is more knowledgeable than the asker. However, we don’t care which user is more knowledgeable, but are more likely to know if two users are both spammers or authentic users. In this paper, we propose a novel user preference graph based on their preference towards the target answers. We assume that the spammers may collaboratively promote the target deceptive answers, while the authentic users may generally promote the authentic answers and demote the deceptive answers. The user preference graph is constructed based on their answer evaluation, such as “helpful” voting or “best answer” selection. 3 Proposed Features We first view the deceptive answer prediction as a binary-classification problem. Two kinds of features, including textual features and contextual features, are described as follows: 3.1 Textual Features We first aim to predict the deceptive answer by analyzing the answer content. Several textual features are extracted from the answer content: 3.1.1 Unigrams and Bigrams The most common type of feature for text classification is the bag-of-word. We use an effective 1724 feature selection method χ2 (Yang and Pedersen, 1997) to select the top 200 unigrams and bigrams as features. The top ten unigrams related to deceptive answers are shown on Table 1. We can see that these words are related to the intent for promotion. professional service advice address site telephone therapy recommend hospital expert Table 1: Top 10 Deceptive Related Unigrams 3.1.2 URL Features Some malicious users may promote their products by linking a URL. We find that URL is good indicator for deceptive answers. However, some URLs may provide the references for the authentic answers. For example, if you ask the weather in mountain view, someone may just post the link to ”http://www.weather.com/”. Therefore, besides the existence of URL, we also use the following URL features: 1). Length of the URLs: we observe that the longer urls are more likely to be spam. 2). PageRank Score: We employ the PageRank (Page et al., 1999) score of each URL as popularity score. 3.1.3 Phone Numbers and Emails There are a lot of contact information mentioned in the Community QA sites, such as phone numbers and email addresses, which are very likely to be deceptive, as good answers are found to be less likely to refer to phone numbers or email addresses than the malicious ones. We extract the number of occurrences of email and phone numbers as features. 3.1.4 Length We have also observed some interesting patterns about the length of answer. Deceptive ones tend to be longer than authentic ones. This can be explained as the deceptive answers may be well prepared to promote the target. We also employ the number of words and sentences in the answer as features. 
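A sketch of how the textual features of this section could be assembled is given below (Python, with scikit-learn's chi-square scorer for the top unigrams and bigrams). The regular expressions are illustrative stand-ins, not the patterns actually used, and the PageRank score of a URL would come from an external link graph or service, so it is omitted here.

import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

def top_ngrams(answers, labels, k=200):
    # Top-k unigrams/bigrams ranked by the chi-square statistic (Sec 3.1.1).
    vectorizer = CountVectorizer(ngram_range=(1, 2))
    X = vectorizer.fit_transform(answers)
    selector = SelectKBest(chi2, k=k).fit(X, labels)
    names = vectorizer.get_feature_names_out()
    return [names[i] for i in selector.get_support(indices=True)]

URL_RE = re.compile(r"https?://\S+")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-. ]?\d{3,4}[-. ]?\d{3,4}\b")   # illustrative pattern only

def surface_features(answer):
    urls = URL_RE.findall(answer)
    return {
        "num_urls": len(urls),
        "avg_url_length": sum(map(len, urls)) / len(urls) if urls else 0.0,
        "num_emails": len(EMAIL_RE.findall(answer)),
        "num_phones": len(PHONE_RE.findall(answer)),
        "num_words": len(answer.split()),
        "num_sentences": len(re.split(r"[.!?]+", answer.strip())),   # rough sentence count
    }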
3.2 Contextual Features Besides the answer textual features, we further investigate various features from the context of the target answer: 3.2.1 Question Answer Relevance The main characteristic of answer in Community QA site is that the answer is provided to answer the corresponding question. We can use the corresponding question as one of the context features by measuring the relevance between the answer and the question. We employ three different models for Question-Answer relevance: Vector Space Model Each answer or question is viewed as a word vector. Given a question q and the answer a, our vector model uses weighted word counts(e.g.TFIDF) as well as the cosine similarity (q · a) of their word vectors as relevant function (Salton and McGill, 1986). However, vector model only consider the exact word match, which is a big problem, especially when the question and answer are generally short compared to the document. For example, Barack Obama and the president of the US are the same person. But the vector model would indicate them to be different. To remedy the wordmismatch problem, we also look for the relevance models in higher semantic levels. Translation Model A translation model is a mathematical model in which the language translation is modeled in a statistical way. The probability of translating a source sentence (as answer here) into target sentence (as question here) is obtained by aligning the words to maximize the product of all the word probabilities. We train a translation model (Brown et al., 1990; Och and Ney, 2003) using the Community QA data, with the question as the target language, and the corresponding best answer as the source language. With translation model, we can compute the translation score for new question and answer. Topic Model To reduce the false negatives of word mismatch in vector model, we also use the topic models to extend matching to semantic topic level. The topic model, such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003), considers a collection of documents with K latent topics, where K is much smaller than the number of words. In essence, LDA maps information from the word dimension to a semantic topic dimension, to address the shortcomings of the vector model. 3.2.2 User Profile Features We extract several user’s activity statistics to construct the user profile features, including the level 1725 of the user in the Community QA site, the number of questions asked by this user, the number of answers provided by this user, and the best answer ratio of this user. 3.2.3 User Authority Score Motivated by expert finding task (Jurczyk and Agichtein, 2007; Si et al., 2010a; Li et al., 2011), the second type of author related feature is authority score, which denotes the expertise score of this user. To compute the authority score, we first construct a directed user graph with the user interactions in the community. The nodes of the graph represent users. An edge between two users indicates a contribution from one user to the other. Specifically, on a Q&A site, an edge from A to B is established when user B answered a question asked by A, which shows user B is more likely to be an expert than A. The weight of an edge indicates the number of interactions. We compute the user’s authority score (AS) based on the link analysis algorithm PageRank: AS(ui) = 1 −d N + d X uj∈M(ui) AS(uj) L(uj) (1) where u1, . . . 
, uN are the users in the collection, N is the total number of users, M(ui) is the set of users whose answers are provided by user ui, L(ui) is the number of users who answer ui’s questions, d is a damping factor, which is set as 0.85. The authority score can be computed iteratively with random initial values. 3.2.4 Robot Features The third type of author related feature is used for detecting whether the author is a robot, which are scripts crafted by malicious users to automatically post answers. We observe that the distributions of the answer-posting time are very different between general user and robot. For example, some robots may make posts continuously and mechanically, hence the time increment may be smaller that human users who would need time to think and process between two posts. Based on this observation, we design an time sequence feature for robot detection. For each author, we can get a list of time points to post answers, T = {t0, t1, ..., tn}, where ti is the time point when posting the ith answer. We first convert the time sequence T to time interval sequence ∆T = {∆t0, ∆t1, ..., ∆tn−1}, where ∆ti = ti+1 −ti. Based on the interval sequences for all users, we then construct a matrix Xm×b whose rows correspond to users and columns correspond to interval histogram with predefined range. We can use each row vector as time sequence pattern to detect robot. To reduce the noise and sparse problem, we use the dimension reduction techniques to extract the latent semantic features with Singular Value Decomposition (SVD) (Deerwester et al., 1990; Kim et al., 2006). 3.2.5 Evaluation from Other Users In the Community QA sites, other users can express their opinions or evaluations on the answer. For example, the asker can choose one of the answers as best answer. We use a bool feature to denote if this answer is selected as the best answer. In addition, other users can label each answer as “helpful” or “not helpful”. We also use this helpful evaluation by other users as the contextual feature, which is defined as the ratio between the number of “helpful” votes and the number of total votes. 3.2.6 Duplication with Other Answers The malicious user may post the pre-written product promotion documents to many answers, or just change the product name. We also compute the similarity between different answers. If the two answers are totally same, but the question is different, these answer is potentially as a deceptive answer. Here, we don’t want to measure the semantic similarity between two answers, but just measure if two answers are similar to the word level, therefore, we apply BleuScore (Papineni et al., 2002), which is a standard metric in machine translation for measuring the overlap between n-grams of two text fragments r and c. The duplication score of each answer is the maximum BleuScore compared to all other answers. 4 Deceptive Answer Prediction with User Preference Graph Besides the textual and contextual features, we also investigate the user relationship for deceptive answer prediction. We assume that similar users tend to perform similar behaviors (posting deceptive answers or posting authentic answers). In this section, we first show how to compute the user similarity (user preference graph construction), and then introduce how to employ the user relationship for deceptive answer prediction. 4.1 User Preference Graph Construction In this section, we propose a new user graph to describe the relationship among users. 
Figure 1 (a) shows the general process in a question answering 1726 Question Answer1 Answer2 Best Answer u1 u2 u3 u4 u5 u6 (a) Question Answering (b) User Preference Relation (c) User Preference Graph Figure 1: User Preference Graph Construction thread. The asker, i.e. u1, asks a question. Then, there will be several answers to answer this question from other users, for example, answerers u2 and u3. After the answers are provides, users can also vote each answer as “helpful” or “not helpful” to show their evaluation towards the answer . For example, users u4, u5 vote the first answer as “not helpful”, and user u6 votes the second answer as “helpful”. Finally, the asker will select one answer as the best answer among all answers. For example, the asker u1 selects the first answer as the “best answer”. To mine the relationship among users, previous studies mainly focus on the asker-answerer relationship (Jurczyk and Agichtein, 2007; Liu et al., 2011). They assume the answerer is more knowledgeable than the asker. Based on this assumption, they can extract the expert in the community, as discussed in Section 3.2.3. However, we don’t care which user is more knowledgeable, but are more interested in whether two users are both malicious users or authentic users. Here, we propose a new user graph based on the user preference. The preference is defined based on the answer evaluation. If two users show same preference towards the target answer, they will have the user-preference relationship. We mainly use two kinds of information: “helpful” evaluation and “best answer” selection. If two users give same “helpful” or “not helpful” to the target answer, we view these two users have same user preference. For example, user u4 and user u5 both give “not helpful” evaluation towards the first answer, we can say that they have same user preference. Besides the real “helpful” evaluation, we also assume the author of the answer gives the “helpful” evaluation to his or her own answer. Then if user u6 give “helpful” evaluation to the second answer, we will view user u6 has same preference as user u3, who is the author of the second answer. We also can extract the user preference with “best answer” selection. If the asker selects the “best answer” among all answers, we will view that the asker has same preference as the author of the “best answer”. For example, we will view user u1 and user u2 have same preference. Based on the two above assumptions, we can extract three user preference relationships (with same preference) from the question answering example in Figure 1 (a): u4 ∼u5, u3 ∼u6, u1 ∼u2, as shown in Figure1 (b). After extracting all user preference relationships, we can construct the user preference graph as shown in Figure 1 (c). Each node represents a user. If two users have the user preference relationship, there will be an edge between them. The edge weight is the number of user preference relationships. In the Community QA sites, the spammers mainly promote their target products by promoting the deceptive answers. The spammers can collaboratively make the deceptive answers look good, by voting them as high-quality answer, or selecting them as “best answer”. However, the authentic users generally have their own judgements to the good and bad answers. Therefore, the evaluation towards the answer reflects the relationship among users. 
Although there maybe noisy relationship, for example, an authentic user may be cheated, and selects the deceptive answer as “best answer”, we hope the overall user preference relation can perform better results than previous user interaction graph for this task. 1727 4.2 Incorporating User Preference Graph To use the user graph, we can just compute the feature value from the graph, and add it into the supervised method as the features introduced in Section 3. Here, we propose a new technique to employ the user preference graph. We utilize the graph regularizer (Zhang et al., 2006; Lu et al., 2010) to constrain the supervised parameter learning. We will introduce this technique based on a commonly used model f(·), the linear weight model, where the function value is determined by linear combination of the input features: f(xi) = wT · xi = X k wk · xik (2) where xi is a K dimension feature vector for the ith answer, the parameter value wk captures the effect of the kth feature in predicting the deceptive answer. The best parameters w∗can be found by minimizing the following objective function: Ω1(w) = X i L(wT xi, yi) + α · |w|2 F (3) where L(wT xi, yi) is a loss function that measures discrepancy between the predicted label wT · xi and the true label yi, where yi ∈ {+1, −1}. The common used loss functions include L(p, y) = (p−y)2 (least square), L(p, y) = ln (1 + exp (−py)) (logistic regression). For simplicity, here we use the least square loss function. |w|2 F = P k w2 k is a regularization term defined in terms of the Frobenius norm of the parameter vector w and plays the role of penalizing overly complex models in order to avoid fitting. We want to incorporate the user preference relationship into the supervised learning framework. The hypothesis is that similar users tend to have similar behaviors, i.e. posting deceptive answers or authentic answers. Here, we employ the user preference graph to denote the user relationship. Based on this intuition, we propose to incorporate the user graph into the linear weight model with graph regularization. The new objective function is changed as: Ω2(w) = X i L(wT xi, yi) + α · |w|2 F + β X ui,uj∈Nu X x∈Aui,y∈Auj wui,uj(f(x) −f(y))2 (4) where Nu is the set of neighboring user pairs in user preference graph, i.e, the user pairs with same preference. Aui is the set of all answers posted by user ui. wui,uj is the weight of edge between ui and uj in user preference graph. In the above objective function, we impose a user graph regularization term β X ui,uj∈Nu X x∈Aui,y∈Auj wui,uj(f(x) −f(y))2 to minimize the answer authenticity difference among users with same preference. This regularization term smoothes the labels on the graph structure, where adjacent users with same preference tend to post answers with same label. 5 Experiments 5.1 Experiment Setting 5.1.1 Dataset Construction In this paper, we employ the Confucius (Si et al., 2010b) data to construct the deceptive answer dataset. Confucius is a community question answering site, developed by Google. We first crawled about 10 million question threads within a time range. Among these data, we further sample a small data set, and ask three trained annotators to manually label the answer as deceptive or not. If two or more people annotate the answer as deceptive, we will extract this answer as a deceptive answer. In total, 12446 answers are marked as deceptive answers. Similarly, we also manually annotate 12446 authentic answers. 
Finally, we get 24892 answers with deceptive and authentic labels as our dataset. With our labeled data, we employ supervised methods to predict deceptive answers. We conduct 5-fold cross-validation for experiments. The larger question threads data is employed for feature learning, such as translation model, and topic model training. 5.1.2 Evaluation Metrics The evaluation metrics are precision, recall and F-score for authentic answer category and deceptive answer category: precision = Sp∩Sc Sp , recall = Sp∩Sc Sc , and F = 2∗precision∗recall precision+recall , where Sc is the set of gold-standard positive instances for the target category, Sp is the set of predicted results. We also use the accuracy as one metric, which is computed as the number of answers predicted correctly, divided by the number of total answers. 1728 Deceptive Answer Authentic Answer Overall Prec. Rec. F-Score Prec. Rec. F-Score Acc. Random 0.50 0.50 0.50 0.50 0.50 0.50 0.50 Unigram/Bigram (UB) 0.61 0.71 0.66 0.66 0.55 0.60 0.63 URL 0.93 0.26 0.40 0.57 0.98 0.72 0.62 Phone/Mail 0.94 0.15 0.25 0.53 0.99 0.70 0.57 Length 0.56 0.91 0.69 0.76 0.28 0.41 0.60 All Textual Features 0.64 0.67 0.66 0.66 0.63 0.64 0.65 QA Relevance 0.66 0.57 0.61 0.62 0.71 0.66 0.64 User Profile 0.62 0.53 0.57 0.59 0.67 0.63 0.60 User Authority 0.54 0.80 0.65 0.62 0.33 0.43 0.56 Robot 0.66 0.62 0.64 0.61 0.66 0.64 0.64 Answer Evaluation 0.55 0.53 0.54 0.55 0.57 0.56 0.55 Answer Duplication 0.69 0.71 0.70 0.70 0.68 0.69 0.69 All Contextual Feature 0.78 0.74 0.76 0.75 0.79 0.77 0.77 Textutal + Contextual 0.80 0.82 0.81 0.82 0.79 0.80 0.81 Table 2: Results With Textual and Contextual Features 5.2 Results with Textual and Contextual Features We tried several different classifiers, including SVM, ME and the linear weight models with least square and logistic regression. We find that they can achieve similar results. For simplicity, the linear weight with least square is employed in our experiment. Table 2 shows the experiment results. For textual features, it achieves much better result with unigram/bigram features than the random guess. This is very different from the answer quality prediction task. The previous studies (Jeon et al., 2006; Song et al., 2010) find that the word features can’t improve the performance on answer quality prediction. However, from Table 1, we can see that the word features can provide some weak signals for deceptive answer prediction, for example, words “recommend”, “address”, “professional” express some kinds of promotion intent. Besides unigram and bigram, the most effective textual feature is URL. The phone and email features perform similar results with URL. The observation of length feature for deceptive answer prediction is very different from previous answer quality prediction. For answer quality prediction, length is an effective feature, for example, long-length provides very strong signals for high-quality answer (Shah and Pomerantz, 2010; Song et al., 2010). However, for deceptive answer prediction, we find that the long answers are more potential to be deceptive. This is because most of deceptive answers are well prepared for product promotion. They will write detailed answers to attract user’s attention and promote their products. Finally, with all textual features, the experiment achieves the best result, 0.65 in accuracy. For contextual features, we can see that, the most effective contextual feature is answer duplication. 
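(Returning briefly to Section 5.1.2, the metrics reported in Table 2 and discussed below can be computed as in the following sketch; the label names and function signature are illustrative only.)

```python
def evaluate(gold, pred, target):
    """Precision, recall and F-score for one category, plus overall accuracy.

    gold and pred are parallel lists of labels ('deceptive' / 'authentic');
    `target` is the category being scored, following Section 5.1.2.
    """
    s_c = {k for k, g in enumerate(gold) if g == target}    # gold-standard set
    s_p = {k for k, p in enumerate(pred) if p == target}    # predicted set
    hit = len(s_c & s_p)
    precision = hit / len(s_p) if s_p else 0.0
    recall = hit / len(s_c) if s_c else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall > 0 else 0.0)
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return precision, recall, f_score, accuracy
```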
The malicious users may copy the prepared deceptive answers or just simply edit the target name to answer different questions. Questionanswer relevance and robot are the second most useful single features for deceptive answer prediction. The main characteristics of the Community QA sites is to accumulate the answers for the target questions. Therefore, all the answers should be relevant to the question. If the answer is not relevant to the corresponding question, this answer is more likely to be deceptive. Robot is one of main sources for deceptive answers. It automatically post the deceptive answers to target questions. Here, we formulate the time series as interval sequence. The experiment result shows that the robot indeed has his own posting behavior patterns. The user profile feature also can contribute a lot to deceptive answer prediction. Among the user profile features, the user level in the Community QA site is a good indicator. The other two contextual features, including user authority and answer evaluation, provide limited improvement. We find the following reasons: First, some malicious users post answers to various questions for product promotion, but don’t ask any question. From Equation 1, when iteratively computing the 1729 Deceptive Answer Authentic Answer Overall Prec. Rec. F-Score Prec. Rec. F-Score Acc. Interaction Graph as Feature 0.80 0.82 0.81 0.82 0.79 0.80 0.81 Interaction Graph as Regularizer 0.80 0.83 0.82 0.82 0.80 0.81 0.82 Preference Graph as Feature 0.79 0.83 0.81 0.82 0.78 0.80 0.81 Preference Graph as Regularizer 0.83 0.86 0.85 0.85 0.83 0.84 0.85 Table 3: Results With User Preference Graph final scores, the authority scores for these malicious users will be accumulated to large values. Therefore, it is hard to distinguish whether the high authority score represents real expert or malicious user. Second, the “best answer” is not a good signal for deceptive answer prediction. This may be selected by malicious users, or the authentic asker was misled, and chose the deceptive answer as “best answer”. This also demonstrates that the deceptive answer prediction is very different from the answer quality prediction. When combining all the contextual features, it can achieve the overall accuracy 0.77, which is much better than the textual features. Finally, with all the textual and contextual features, we achieve the overall result, 0.81 in accuracy. 5.3 Results with User Preference Graph Table 3 shows the results with user preference graph. We compare with several baselines. Interaction graph is constructed by the asker-answerer relationship introduced in Section 3.2.3. When using the user graph as feature, we compute the authority score for each user with PageRank as shown in Equation 1. We also incorporating the interaction graph with a regularizer as shown in Equation 4. Note that we didn’t consider the edge direction when using interaction graph as a regularizer. From the table, we can see that when incorporating user preference graph as a feature, it can’t achieve a better result than the interaction graph. The reason is similar as the interaction graph. The higher authority score may boosted by other spammer, and can’t be a good indicator to distinguish deceptive and authentic answers. 
When we incorporate the user preference graph as a regularizer, it can achieve about 4% further improvement, which demonstrates that the user evaluation towards answers, such as “helpful” voting and “best answer” selection, is a good signal to generate user relationship for deceptive answer prediction, and the graph regularization is an effective technique to incorporate the user preference graph. We also analyze the parameter sen10 −5 10 −4 10 −3 10 −2 10 −1 10 0 0.76 0.78 0.8 0.82 0.84 0.86 0.88 Accuracy General supervised method Interaction Graph as Regularizer Preference Graph as Regularizer Figure 2: Results with different values of β sitivity. β is the tradeoff weight for graph regularization term. Figure 2 shows the results with different values of β. We can see that when β ranges from 10−4 ∼10−2, the deceptive answer prediction can achieve best results. 6 Conclusions and Future Work In this paper, we discuss the deceptive answer prediction task in Community QA sites. With the manually labeled data set, we first predict the deceptive answers with traditional classification method. Two types of features, including textual features and contextual features, are extracted and analyzed. We also introduce a new user preference graph, constructed based on the user evaluations towards the target answer, such as “helpful” voting and “best answer” selection. A graph regularization method is proposed to incorporate the user preference graph for deceptive answer prediction. The experiments are conducted to discuss the effects of different features. The experiment results also show that the method with user preference graph can achieve more accurate results for deceptive answer prediction. In the future work, it is interesting to incorporate more features into deceptive answer prediction. It is also important to predict the deceptive question threads, which are posted and answered both by malicious users for product promotion. Malicious user group detection is also an important task in the future. 1730 References Lada A. Adamic, Jun Zhang, Eytan Bakshy, and Mark S. Ackerman. 2008. Knowledge sharing and yahoo answers: everyone knows something. In Proceedings of the 17th international conference on World Wide Web, WWW ’08, pages 665–674, New York, NY, USA. ACM. Jiang Bian, Yandong Liu, Ding Zhou, Eugene Agichtein, and Hongyuan Zha. 2009. Learning to recognize reliable users and content in social media with coupled mutual reinforcement. In Proceedings of the 18th international conference on World wide web, WWW ’09, pages 51–60, NY, USA. ACM. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March. Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Comput. Linguist., 16:79–85, June. S. Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391–407. A. Figueroa and J. Atkinson. 2011. Maximum entropy context models for ranking biographical answers to open-domain definition questions. In Twenty-Fifth AAAI Conference on Artificial Intelligence. F. Maxwell Harper, Daphne Raban, Sheizaf Rafaeli, and Joseph A. Konstan. 2008. Predictors of answer quality in online q&a sites. 
In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, CHI ’08, pages 865– 874, New York, NY, USA. ACM. Daisuke Ishikawa, Tetsuya Sakai, and Noriko Kando, 2010. Overview of the NTCIR-8 Community QA Pilot Task (Part I): The Test Collection and the Task, pages 421–432. Number Part I. Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and answer archives. In Proceedings of the 14th ACM CIKM conference, 05, pages 84–90, NY, USA. ACM. J. Jeon, W.B. Croft, J.H. Lee, and S. Park. 2006. A framework to predict the quality of answers with non-textual features. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 228–235. ACM. P. Jurczyk and E. Agichtein. 2007. Discovering authorities in question answer communities by using link analysis. In Proceedings of the sixteenth ACM CIKM conference, pages 919–922. ACM. H. Kim, P. Howland, and H. Park. 2006. Dimension reduction in text classification with support vector machines. Journal of Machine Learning Research, 6(1):37. Fangtao Li, Minlie Huang, Yi Yang, and Xiaoyan Zhu. 2011. Learning to identify review spam. In Proceedings of the Twenty-Second international joint conference on Artificial Intelligence-Volume Volume Three, pages 2488–2493. AAAI Press. Yuanjie Liu, Shasha Li, Yunbo Cao, Chin-Yew Lin, Dingyi Han, and Yong Yu. 2008. Understanding and summarizing answers in community-based question answering services. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING ’08, pages 497– 504, Stroudsburg, PA, USA. Association for Computational Linguistics. Jing Liu, Young-In Song, and Chin-Yew Lin. 2011. Competition-based user expertise score estimation. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 425–434. ACM. Yue Lu, Panayiotis Tsaparas, Alexandros Ntoulas, and Livia Polanyi. 2010. Exploiting social context for review quality prediction. In Proceedings of the 19th international conference on World wide web, pages 691–700. ACM. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Comput. Linguist., 29:19–51, March. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical Report 1999-66, Stanford InfoLab, November. SIDL-WP1999-0120. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. ACL. Gerard Salton and Michael J. McGill. 1986. Introduction to Modern Information Retrieval. McGrawHill, Inc., New York, NY, USA. Chirag Shah and Jefferey Pomerantz. 2010. Evaluating and predicting answer quality in community qa. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’10, pages 411–418, New York, NY, USA. ACM. X. Si, Z. Gyongyi, and E. Y. Chang. 2010a. Scalable mining of topic-dependent user reputation for improving user generated content search quality. In Google Technical Report. 1731 Xiance Si, Edward Y. Chang, Zolt´an Gy¨ongyi, and Maosong Sun. 2010b. Confucius and its intelligent disciples: integrating social with search. Proc. VLDB Endow., 3:1505–1516, September. 
Young-In Song, Chin-Yew Lin, Yunbo Cao, and HaeChang Rim. 2008. Question utility: a novel static ranking of question search. In Proceedings of the 23rd national conference on Artificial intelligence - Volume 2, AAAI’08, pages 1231–1236. AAAI Press. Y.I. Song, J. Liu, T. Sakai, X.J. Wang, G. Feng, Y. Cao, H. Suzuki, and C.Y. Lin. 2010. Microsoft research asia with redmond at the ntcir-8 community qa pilot task. In Proceedings of NTCIR. Wei Wei, Gao Cong, Xiaoli Li, See-Kiong Ng, and Guohui Li. 2011. Integrating community question and answer archives. In AAAI. Y. Yang and J.O. Pedersen. 1997. A comparative study on feature selection in text categorization. In MACHINE LEARNING-INTERNATIONAL WORKSHOP THEN CONFERENCE-, pages 412– 420. MORGAN KAUFMANN PUBLISHERS. Tong Zhang, Alexandrin Popescul, and Byron Dom. 2006. Linear prediction models with graph regularization for web-page categorization. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 821–826. ACM. 1732
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 166–175, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Word Alignment Modeling with Context Dependent Deep Neural Network Nan Yang1, Shujie Liu2, Mu Li2, Ming Zhou2, Nenghai Yu1 1University of Science and Technology of China, Hefei, China 2Microsoft Research Asia, Beijing, China {v-nayang,shujliu,muli,mingzhou}@microsoft.com [email protected] Abstract In this paper, we explore a novel bilingual word alignment approach based on DNN (Deep Neural Network), which has been proven to be very effective in various machine learning tasks (Collobert et al., 2011). We describe in detail how we adapt and extend the CD-DNNHMM (Dahl et al., 2012) method introduced in speech recognition to the HMMbased word alignment model, in which bilingual word embedding is discriminatively learnt to capture lexical translation information, and surrounding words are leveraged to model context information in bilingual sentences. While being capable to model the rich bilingual correspondence, our method generates a very compact model with much fewer parameters. Experiments on a large scale EnglishChinese word alignment task show that the proposed method outperforms the HMM and IBM model 4 baselines by 2 points in F-score. 1 Introduction Recent years research communities have seen a strong resurgent interest in modeling with deep (multi-layer) neural networks. This trending topic, usually referred under the name Deep Learning, is started by ground-breaking papers such as (Hinton et al., 2006), in which innovative training procedures of deep structures are proposed. Unlike shallow learning methods, such as Support Vector Machine, Conditional Random Fields, and Maximum Entropy, which need hand-craft features as input, DNN can learn suitable features (representations) automatically with raw input data, given a training objective. DNN did not achieve expected success until 2006, when researchers discovered a proper way to intialize and train the deep architectures, which contains two phases: layer-wise unsupervised pretraining and supervised fine tuning. For pretraining, Restricted Boltzmann Machine (RBM) (Hinton et al., 2006), auto-encoding (Bengio et al., 2007) and sparse coding (Lee et al., 2007) are proposed and popularly used. The unsupervised pretraining trains the network one layer at a time, and helps to guide the parameters of the layer towards better regions in parameter space (Bengio, 2009). Followed by fine tuning in this region, DNN is shown to be able to achieve state-of-the-art performance in various area, or even better (Dahl et al., 2012) (Kavukcuoglu et al., 2010). DNN also achieved breakthrough results on the ImageNet dataset for objective recognition (Krizhevsky et al., 2012). For speech recognition, (Dahl et al., 2012) proposed context-dependent neural network with large vocabulary, which achieved 16.0% relative error reduction. DNN has also been applied in Natural Language Processing (NLP) field. Most works convert atomic lexical entries into a dense, low dimensional, real-valued representation, called word embedding; Each dimension represents a latent aspect of a word, capturing its semantic and syntactic properties (Bengio et al., 2006). Word embedding is usually first learned from huge amount of monolingual texts, and then fine-tuned with taskspecific objectives. 
(Collobert et al., 2011) and (Socher et al., 2011) further apply Recursive Neural Networks to address the structural prediction tasks such as tagging and parsing, and (Socher et al., 2012) explores the compositional aspect of word representations. Inspired by successful previous works, we propose a new DNN-based word alignment method, which exploits contextual and semantic similarities between words. As shown in example (a) of Figure 1, in word pair {“juda” ⇒“mammoth”}, the Chinese word “juda” is a common word, but 166 mammoth will be a jiang shi yixiang juda gongcheng job (a) ሶ ᱟ а亩 ᐘབྷ ᐕ〻 A : farmer Yibula said nongmin yibula shuo : “ “ (b) ߌ≁ Ժᐳ᣹ 䈤 Figure 1: Two examples of word alignment the English word “mammoth” is not, so it is very hard to align them correctly. If we know that “mammoth” has the similar meaning with “big”, or “huge”, it would be easier to find the corresponding word in the Chinese sentence. As we mentioned in the last paragraph, word embedding (trained with huge monolingual texts) has the ability to map a word into a vector space, in which, similar words are near each other. For example (b) in Figure 1, for the word pair {“yibula” ⇒“Yibula”}, both the Chinese word “yibula” and English word “Yibula” are rare name entities, but the words around them are very common, which are {“nongmin”, “shuo”} for Chinese side and {“farmer”, “said”} for the English side. The pattern of the context {“nongmin X shuo” ⇒“farmer X said”} may help to align the word pair which fill the variable X, and also, the pattern {“yixiang X gongcheng” ⇒“a X job”} is helpful to align the word pair {“juda” ⇒“mammoth”} for example (a). Based on the above analysis, in this paper, both the words in the source and target sides are firstly mapped to a vector via a discriminatively trained word embeddings, and word pairs are scored by a multi-layer neural network which takes rich contexts (surrounding words on both source and target sides) into consideration; and a HMM-like distortion model is applied on top of the neural network to characterize structural aspect of bilingual sentences. In the rest of this paper, related work about DNN and word alignment are first reviewed in Section 2, followed by a brief introduction of DNN in Section 3. We then introduce the details of leveraging DNN for word alignment, including the details of our network structure in Section 4 and the training method in Section 5. The merits of our approach are illustrated with the experiments described in Section 6, and we conclude our paper in Section 7. 2 Related Work DNN with unsupervised pre-training was firstly introduced by (Hinton et al., 2006) for MNIST digit image classification problem, in which, RBM was introduced as the layer-wise pre-trainer. The layer-wise pre-training phase found a better local maximum for the multi-layer network, thus led to improved performance. (Krizhevsky et al., 2012) proposed to apply DNN to do object recognition task (ImageNet dataset), which brought down the state-of-the-art error rate from 26.1% to 15.3%. (Seide et al., 2011) and (Dahl et al., 2012) apply Context-Dependent Deep Neural Network with HMM (CD-DNN-HMM) to speech recognition task, which significantly outperforms traditional models. Most methods using DNN in NLP start with a word embedding phase, which maps words into a fixed length, real valued vectors. (Bengio et al., 2006) proposed to use multi-layer neural network for language modeling task. 
(Collobert et al., 2011) applied DNN on several NLP tasks, such as part-of-speech tagging, chunking, name entity recognition, semantic labeling and syntactic parsing, where they got similar or even better results than the state-of-the-art on these tasks. (Niehues and Waibel, 2012) shows that machine translation results can be improved by combining neural language model with n-gram traditional language. (Son et al., 2012) improves translation quality of n-gram translation model by using a bilingual neural language model. (Titov et al., 2012) learns a context-free cross-lingual word embeddings to facilitate cross-lingual information retrieval. For the related works of word alignment, the most popular methods are based on generative models such as IBM Models (Brown et al., 1993) and HMM (Vogel et al., 1996). Discriminative approaches are also proposed to use hand crafted features to improve word alignment. Among them, (Liu et al., 2010) proposed to use phrase and rule pairs to model the context information in a loglinear framework. Unlike previous discriminative methods, in this work, we do not resort to any hand crafted features, but use DNN to induce “features” from raw words. 167 3 DNN structures for NLP The most important and prevalent features available in NLP are the words themselves. To apply DNN to NLP task, the first step is to transform a discrete word into its word embedding, a low dimensional, dense, real-valued vector (Bengio et al., 2006). Word embeddings often implicitly encode syntactic or semantic knowledge of the words. Assuming a finite sized vocabulary V , word embeddings form a (L×|V |)-dimension embedding matrix WV , where L is a pre-determined embedding length; mapping words to embeddings is done by simply looking up their respective columns in the embedding matrix WV . The lookup process is called a lookup layer LT , which is usually the first layer after the input layer in neural network. After words have been transformed to their embeddings, they can be fed into subsequent classical network layers to model highly non-linear relations: zl = fl(Mlzl−1 + bl) (1) where zl is the output of lth layer, Ml is a |zl| × |zl−1| matrix, bl is a |zl|-length vector, and fl is an activation function. Except for the last layer, fl must be non-linear. Common choices for fl include sigmoid function, hyperbolic function, “hard” hyperbolic function etc. Following (Collobert et al., 2011), we choose “hard” hyperbolic function as our activation function in this work: htanh(x) =    1 if x is greater than 1 −1 if x is less than -1 x otherwise (2) If probabilistic interpretation is desired, a softmax layer (Bridle, 1990) can be used to do normalization: zl i = ezl−1 i |zl| P j=1 ezl−1 j (3) The above layers can only handle fixed sized input and output. If input must be of variable length, convolution layer and max layer can be used, (Collobert et al., 2011) which transform variable length input to fixed length vector for further processing. Multi-layer neural networks are trained with the standard back propagation algorithm (LeCun, 1985). As the networks are non-linear and the task specific objectives usually contain many local maximums, special care must be taken in the optimization process to obtain good parameters. Techniques such as layerwise pre-training(Bengio et al., 2007) and many tricks(LeCun et al., 1998) have been developed to train better neural networks. 
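To make the generic building blocks above concrete, here is a minimal sketch of the lookup layer, a classic layer with the "hard" hyperbolic activation of Eq. (2), and the softmax of Eq. (3). Layer sizes and the toy example are our own; only the [-0.1, 0.1] initialization range is taken from the paper.

```python
import numpy as np

def htanh(x):
    # "Hard" hyperbolic tangent of Eq. (2): values are clipped to [-1, 1].
    return np.clip(x, -1.0, 1.0)

def softmax(z):
    # Softmax layer of Eq. (3), shifted for numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

class Layer:
    """One classic layer of Eq. (1): z_l = f_l(M_l z_{l-1} + b_l)."""
    def __init__(self, n_in, n_out, activation=htanh, rng=np.random):
        # Parameters initialized uniformly in [-0.1, 0.1], as in Section 5.
        self.M = rng.uniform(-0.1, 0.1, size=(n_out, n_in))
        self.b = np.zeros(n_out)
        self.f = activation

    def __call__(self, z_prev):
        return self.f(self.M @ z_prev + self.b)

def lookup(W_V, word_ids):
    """Lookup layer LT: concatenate the embedding columns of the input words.
    W_V is the (L x |V|) embedding matrix; word_ids index its columns."""
    return W_V[:, word_ids].T.reshape(-1)

# Toy example: a two-layer network over a window of three word ids.
W_V = np.random.uniform(-0.1, 0.1, size=(20, 1000))   # L = 20, |V| = 1000
net = lambda z: softmax(Layer(40, 5, activation=lambda x: x)(Layer(60, 40)(z)))
print(net(lookup(W_V, [3, 17, 42])))                   # a 5-way distribution
```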
Besides that, neural network training also involves some hyperparameters such as learning rate, the number of hidden layers. We will address these issues in section 4. 4 DNN for word alignment Our DNN word alignment model extends classic HMM word alignment model (Vogel et al., 1996). Given a sentence pair (e, f), HMM word alignment takes the following form: P(a, e|f) = |e| Y i=1 Plex(ei|fai)Pd(ai −ai−1) (4) where Plex is the lexical translation probability and Pd is the jump distance distortion probability. One straightforward way to integrate DNN into HMM is to use neural network to compute the emission (lexical translation) probability Plex. Such approach requires a softmax layer in the neural network to normalize over all words in source vocabulary. As vocabulary for natural languages is usually very large, it is prohibitively expensive to do the normalization. Hence we give up the probabilistic interpretation and resort to a nonprobabilistic, discriminative view: sNN(a|e, f) = |e| Y i=1 tlex(ei, fai|e, f)td(ai, ai−1|e, f) (5) where tlex is a lexical translation score computed by neural network, and td is a distortion score. In the classic HMM word alignment model, context is not considered in the lexical translation probability. Although we can rewrite Plex(ei|fai) to Plex(ei|context of fai) to model context, it introduces too many additional parameters and leads to serious over-fitting problem due to data sparseness. As a matter of fact, even without any contexts, the lexical translation table in HMM already contains O(|Ve| ∗|Vf|) parameters, where |Ve| and Vf denote source and target vocabulary sizes. In contrast, our model does not maintain a separate translation score parameters for every source-target word pair, but computes tlex through a multi-layer network, which naturally handles contexts on both sides without explosive growth of number of parameters. 168 Input Source window e Target window f ) ( 3 2 3 b z M   ) ( 2 1 2 b z M   i i-1 i+1 j-1 j j+1 Lookup LT 0z Layer f1 1z Layer f2 2z 农民 伊布拉 说 farmer yibula said ) ( 1 0 1 b z M   htanh htanh Layer f3 ) , | , ( f e f e t j i lex Figure 2: Network structure for computing context dependent lexical translation scores. The example computes translation score for word pair (yibula, yibulayin) given its surrounding context. Figure 2 shows the neural network we used to compute context dependent lexical translation score tlex. For word pair (ei, fj), we take fixed length windows surrounding both ei and fj as input: (ei−sw 2 , . . . , ei+ sw 2 , fj−tw 2 , . . . , fj+ tw 2 ), where sw, tw stand window sizes on source and target side respectively. Words are converted to embeddings using the lookup table LT, and the catenation of embeddings are fed to a classic neural network with two hidden-layers, and the output of the network is the our lexical translation score: tlex(ei, fj|e, f) = f3 ◦f2 ◦f1 ◦LT(window(ei), window(fj)) (6) f1 and f2 layers use htanh as activation functions, while f3 is only a linear transformation with no activation function. For the distortion td, we could use a lexicalized distortion model: td(ai, ai−1|e, f) = td(ai −ai−1|window(fai−1)) (7) which can be computed by a neural network similar to the one used to compute lexical translation scores. If we map jump distance (ai −ai−1) to B buckets, we can change the length of the output layer to B, where each dimension in the output stands for a different bucket of jump distances. 
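The following sketch mirrors the network of Figure 2 and Eq. (6) under simplifying assumptions: it uses the sizes reported later in Section 5 (embedding length 20, windows of 11, hidden layers of 120 and 10), pads out-of-sentence positions with a single placeholder id instead of the distinct boundary and null symbols, and is our reading of the model, not the authors' implementation.

```python
import numpy as np

class LexicalScorer:
    """Context-dependent lexical translation score t_lex of Eq. (6)."""

    def __init__(self, W_V, emb_len=20, sw=11, tw=11, h1=120, h2=10,
                 rng=np.random):
        self.W_V, self.sw, self.tw = W_V, sw, tw       # (emb_len x |V|) lookup table
        n0 = emb_len * (sw + tw)
        init = lambda a, b: rng.uniform(-0.1, 0.1, size=(a, b))
        self.M1, self.b1 = init(h1, n0), np.zeros(h1)  # layer f1
        self.M2, self.b2 = init(h2, h1), np.zeros(h2)  # layer f2
        self.M3, self.b3 = init(1, h2), np.zeros(1)    # layer f3 (linear output)

    @staticmethod
    def _window(ids, pos, size, pad_id):
        # The paper fills boundaries with <s>/</s>; a single pad id is a
        # simplification of this sketch.
        half = size // 2
        return [ids[k] if 0 <= k < len(ids) else pad_id
                for k in range(pos - half, pos + half + 1)]

    def score(self, e_ids, f_ids, i, j, pad_id=0):
        ids = (self._window(e_ids, i, self.sw, pad_id) +
               self._window(f_ids, j, self.tw, pad_id))
        z0 = self.W_V[:, ids].T.reshape(-1)            # lookup layer LT
        htanh = lambda x: np.clip(x, -1.0, 1.0)        # Eq. (2)
        z1 = htanh(self.M1 @ z0 + self.b1)             # f1
        z2 = htanh(self.M2 @ z1 + self.b2)             # f2
        return (self.M3 @ z2 + self.b3).item()         # f3: no activation
```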
But we found in our initial experiments on small scale data, lexicalized distortion does not produce better alignment over the simple jumpdistance based model. So we drop the lexicalized distortion and reverse to the simple version: td(ai, ai−1|e, f) = td(ai −ai−1) (8) Vocabulary V of our alignment model consists of a source vocabulary Ve and a target vocabulary Vf. As in (Collobert et al., 2011), in addition to real words, each vocabulary contains a special unknown word symbol ⟨unk⟩to handle unseen words; two sentence boundary symbols ⟨s⟩and ⟨/s⟩, which are filled into surrounding window when necessary; furthermore, to handle null alignment, we must also include a special null symbol ⟨null⟩. When fj is null word, we simply fill the surrounding window with the identical null symbols. To decode our model, the lexical translation scores are computed for each source-target word pair in the sentence pair, which requires going through the neural network (|e| × |f|) times; after that, the forward-backward algorithm can be used to find the viterbi path as in the classic HMM model. The majority of tunable parameters in our model resides in the lookup table LT, which is a (L × (|Ve| + |Vf|))-dimension matrix. For a reasonably large vocabulary, the number is much smaller than the number of parameters in classic HMM model, which is in the order of (|Ve|×|Vf|). 1 The ability to model context is not unique to our model. In fact, discriminative word alignment can model contexts by deploying arbitrary features (Moore, 2005). Different from previous discriminative word alignment, our model does not use manually engineered features, but learn “features” automatically from raw words by the neural network. (Berger et al., 1996) use a maximum entropy model to model the bag-of-words context for word alignment, but their model treats each word as a distinct feature, which can not leverage the similarity between words as our model. 5 Training Although unsupervised training technique such as Contrastive Estimation as in (Smith and Eisner, 2005), (Dyer et al., 2011) can be adapted to train 1In practice, the number of non-zero parameters in classic HMM model would be much smaller, as many words do not co-occur in bilingual sentence pairs. In our experiments, the number of non-zero parameters in classic HMM model is about 328 millions, while the NN model only has about 4 millions. 169 our model from raw sentence pairs, they are too computational demanding as the lexical translation probabilities must be computed from neural networks. Hence, we opt for a simpler supervised approach, which learns the model from sentence pairs with word alignment. As we do not have a large manually word aligned corpus, we use traditional word alignment models such as HMM and IBM model 4 to generate word alignment on a large parallel corpus. We obtain bidirectional alignment by running the usual growdiag-final heuristics (Koehn et al., 2003) on unidirectional results from both directions, and use the results as our training data. Similar approach has been taken in speech recognition task (Dahl et al., 2012), where training data for neural network model is generated by forced decoding with traditional Gaussian mixture models. Tunable parameters in neural network alignment model include: word embeddings in lookup table LT, parameters W l, bl for linear transformations in the hidden layers of the neural network, and distortion parameters sd of jump distance. 
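A minimal sketch of the decoding step described above: lexical scores are computed for every source-target word pair and a Viterbi search returns the best path. Combining scores additively and ignoring null alignment are simplifications of ours, not choices made in the paper, and `scorer` is any object with the `score` interface of the earlier sketch.

```python
def jump_score(d, s_d=None):
    """Jump-distance distortion t_d of Eq. (8), bucketed as in Section 5.3:
    distances in [-7, 7] get their own parameter, longer forward/backward
    jumps share two catch-all buckets (17 parameters in total)."""
    bucket = max(-8, min(8, d))
    return (s_d or {}).get(bucket, 0.0)

def viterbi_align(e_ids, f_ids, scorer, distortion=jump_score):
    """Best alignment path under Eq. (5); scores are combined additively
    here and null alignment is omitted for brevity."""
    I, J = len(e_ids), len(f_ids)
    # Lexical scores for every source-target word pair: (|e| x |f|) forward
    # passes through the neural network.
    lex = [[scorer.score(e_ids, f_ids, i, j) for j in range(J)]
           for i in range(I)]
    best = list(lex[0])                       # best path score ending at a_0 = j
    back = [[0] * J for _ in range(I)]
    for i in range(1, I):
        new_best = []
        for j in range(J):
            k = max(range(J), key=lambda k: best[k] + distortion(j - k))
            back[i][j] = k
            new_best.append(best[k] + distortion(j - k) + lex[i][j])
        best = new_best
    a = [max(range(J), key=lambda j: best[j])]
    for i in range(I - 1, 0, -1):             # backtrace
        a.append(back[i][a[-1]])
    return list(reversed(a))                  # a[i] = position in f aligned to e_i
```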
We take the following ranking loss with margin as our training criteria: loss(θ) = X every (e,f) max{0, 1 −sθ(a+|e, f) + sθ(a−|e, f)} (9) where θ denotes all tunable parameters, a+ is the gold alignment path, a−is the highest scoring incorrect alignment path under θ, and sθ is model score for alignment path defined in Eq. 5 . One nuance here is that the gold alignment after grow-diag-final contains many-to-many links, which cannot be generated by any path. Our solution is that for each source word alignment multiple target, we randomly choose one link among all candidates as the golden link. Because our multi-layer neural network is inherently non-linear and is non-convex, directly training against the above criteria is unlikely to yield good results. Instead, we take the following steps to train our model. 5.1 Pre-training initial word embedding with monolingual data Most parameters reside in the word embeddings. To get a good initial value, the usual approach is to pre-train the embeddings on a large monolingual corpus. We replicate the work in (Collobert et al., 2011) and train word embeddings for source and target languages from their monolingual corpus respectively. Our vocabularies Vs and Vt contain the most frequent 100,000 words from each side of the parallel corpus, and all other words are treated as unknown words. We set word embedding length to 20, window size to 5, and the length of the only hidden layer to 40. Follow (Turian et al., 2010), we randomly initialize all parameters to [-0.1, 0.1], and use stochastic gradient descent to minimize the ranking loss with a fixed learning rate 0.01. Note that embedding for null word in either Ve and Vf cannot be trained from monolingual corpus, and we simply leave them at the initial value untouched. Word embeddings from monolingual corpus learn strong syntactic knowledge of each word, which is not always desirable for word alignment between some language pairs like English and Chinese. For example, many Chinese words can act as a verb, noun and adjective without any change, while their English counter parts are distinct words with quite different word embeddings due to their different syntactic roles. Thus we have to modify the word embeddings in subsequent steps according to bilingual data. 5.2 Training neural network based on local criteria Training the network against the sentence level criteria Eq. 5 directly is not efficient. Instead, we first ignore the distortion parameters and train neural networks for lexical translation scores against the following local pairwise loss: max{0, 1 −tθ((e, f)+|e, f) + tθ((e, f)−|e, f)} (10) where (e, f)+ is a correct word pair, (e, f)−is a wrong word pair in the same sentence, and tθ is as defined in Eq. 6 . This training criteria essentially means our model suffers loss unless it gives correct word pairs a higher score than random pairs from the same sentence pair with some margin. We initialize the lookup table with embeddings obtained from monolingual training, and randomly initialize all W l and bl in linear layers to [-0.1, 0.1]. We minimize the loss using stochastic gradient descent as follows. We randomly cycle through all sentence pairs in training data; for each correct word pair (including null alignment), we generate a positive example, and generate two negative examples by randomly corrupting either 170 side of the pair with another word in the sentence pair. We set learning rate to 0.01. 
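The local training procedure of Section 5.2, built around Eq. (10), can be sketched as follows. The back-propagation update is abstracted behind an assumed `scorer.update` interface (not defined in the paper), and the guard against degenerate one-word sentences is ours.

```python
import random

def pairwise_loss(t_pos, t_neg):
    """Local margin ranking loss of Eq. (10): a correct word pair should
    outscore a random pair from the same sentence pair by a margin of 1."""
    return max(0.0, 1.0 - t_pos + t_neg)

def train_local(scorer, corpus, iterations=50, lr=0.01):
    """Stochastic training loop described in Section 5.2 (a sketch).

    `corpus` is a list of (e_ids, f_ids, links) with gold links (i, j).
    `scorer.score` is as in the earlier sketch; `scorer.update`, standing in
    for one back-propagation step, is an assumed interface.
    """
    for _ in range(iterations):                   # N passes over the data
        random.shuffle(corpus)                    # randomly cycle through pairs
        for e_ids, f_ids, links in corpus:
            for i, j in links:                    # each correct word pair
                t_pos = scorer.score(e_ids, f_ids, i, j)
                # two negatives: corrupt either side with another word
                neg_i = random.choice([k for k in range(len(e_ids)) if k != i] or [i])
                neg_j = random.choice([k for k in range(len(f_ids)) if k != j] or [j])
                for ni, nj in ((neg_i, j), (i, neg_j)):
                    t_neg = scorer.score(e_ids, f_ids, ni, nj)
                    if pairwise_loss(t_pos, t_neg) > 0.0:
                        scorer.update(e_ids, f_ids, (i, j), (ni, nj), lr)
```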
As there is no clear stopping criteria, we simply run the stochastic optimizer through parallel corpus for N iterations. In this work, N is set to 50. To make our model concrete, there are still hyper-parameters to be determined: the window size sw and tw, the length of each hidden layer Ll. We empirically set sw and tw to 11, L1 to 120, and L2 to 10, which achieved a minimal loss on a small held-out data among several settings we tested. 5.3 Training distortion parameters We fix neural network parameters obtained from the last step, and tune the distortion parameters sd with respect to the sentence level loss using standard stochastic gradient descent. We use a separate parameter for jump distance from -7 and 7, and another two parameters for longer forward/backward jumps. We initialize all parameters in sd to 0, set the learning rate for the stochastic optimizer to 0.001. As there are only 17 parameters in sd, we only need to run the optimizer over a small portion of the parallel corpus. 5.4 Tuning neural network based on sentence level criteria Up-to-now, parameters in the lexical translation neural network have not been trained against the sentence level criteria Eq. 5. We could achieve this by re-using the same online training method used to train distortion parameters, except that we now fix the distortion parameters and let the loss back-propagate through the neural networks. Sentence level training does not take larger context in modeling word translations, but only to optimize the parameters regarding to the sentence level loss. This tuning is quite slow, and it did not improve alignment on an initial small scale experiment; so, we skip this step in all subsequent experiment in this work. 6 Experiments and Results We conduct our experiment on Chinese-to-English word alignment task. We use the manually aligned Chinese-English alignment corpus (Haghighi et al., 2009) which contains 491 sentence pairs as test set. We adapt the segmentation on the Chinese side to fit our word segmentation standard. 6.1 Data Our parallel corpus contains about 26 million unique sentence pairs in total which are mined from web. The monolingual corpus to pre-train word embeddings are also crawled from web, which amounts to about 1.1 billion unique sentences for English and about 300 million unique sentences for Chinese. As pre-processing, we lowercase all English words, and map all numbers to one special token; and we also map all email addresses and URLs to another special token. 6.2 Settings We use classic HMM and IBM model 4 as our baseline, which are generated by Giza++ (Och and Ney, 2000). We train our proposed model from results of classic HMM and IBM model 4 separately. Since classic HMM, IBM model 4 and our model are all uni-directional, we use the standard growdiag-final to generate bi-directional results for all models. Models are evaluated on the manually aligned test set using standard metric: precision, recall and F1-score. 6.3 Alignment Result It can be seen from Table 1, the proposed model consistently outperforms its corresponding baseline whether it is trained from alignment of classic HMM or IBM model 4. It is also clear that the setting prec. recall F-1 HMM 0.768 0.786 0.777 HMM+NN 0.810 0.790 0.798 IBM4 0.839 0.805 0.822 IBM4+NN 0.885 0.812 0.847 Table 1: Word alignment result. The first row and third row show baseline results obtained by classic HMM and IBM4 model. The second row and fourth row show results of the proposed model trained from HMM and IBM4 respectively. 
results of our model also depends on the quality of baseline results, which is used as training data of our model. In future we would like to explore whether our method can improve other word alignment models. We also conduct experiment to see the effect on end-to-end SMT performance. We train hier171 archical phrase model (Chiang, 2007) from different word alignments. Despite different alignment scores, we do not obtain significant difference in translation performance. In our C-E experiment, we tuned on NIST-03, and tested on NIST08. Case-insensitive BLEU-4 scores on NIST-08 test are 0.305 and 0.307 for models trained from IBM-4 and NN alignment results. The result is not surprising considering our parallel corpus is quite large, and similar observations have been made in previous work as (DeNero and Macherey, 2011) that better alignment quality does not necessarily lead to better end-to-end result. 6.4 Result Analysis 6.4.1 Error Analysis From Table 1 we can see higher F-1 score of our model mainly comes from higher precision, with recall similar to baseline. By analyzing the results, we found out that for both baseline and our model, a large part of missing alignment links involves stop words like English words “the”, “a”, “it” and Chinese words “de”. Stop words are inherently hard to align, which often requires grammatical judgment unavailable to our models; as they are also extremely frequent, our model fully learns their alignment patterns of the baseline models, including errors. On the other hand, our model performs better on low-frequency words, especially proper nouns. Take person names for example. Most names are low-frequency words, on which baseline HMM and IBM4 models show the “garbage collector” phenomenon. In our model, different person names have very similar word embeddings on both English side and Chinese side, due to monolingual pre-training; what is more, different person names often appear in similar contexts. As our model considers both word embeddings and contexts, it learns that English person names should be aligned to Chinese person names, which corrects errors of baseline models and leads to better precision. 6.4.2 Effect of context To examine how context contribute to alignment quality, we re-train our model with different window size, all from result of IBM model 4. From Figure 3, we can see introducing context increase the quality of the learned alignment, but the benefit is diminished for window size over 5. On the other hand, the results are quite stable even with large window size 13, without noticeable over0.74 0.76 0.78 0.8 0.82 0.84 0.86 1 3 5 7 9 11 13 Figure 3: Effect of different window sizes on word alignment F-score. fitting problem. This is not surprising considering that larger window size only requires slightly more parameters in the linear layers. Lastly, it is worth noticing that our model with no context (window size 1) performs much worse than settings with larger window size and baseline IBM4. Our explanation is as follows. Our model uses the simple jump distance based distortion, which is weaker than the more sophisticated distortions in IBM model 4; thus without context, it does not perform well compared to IBM model 4. With larger window size, our model is able to produce more accurate translation scores based on more contexts, which leads to better alignment despite the simpler distortions. IBM4+NN F-1 1-hidden-layer 0.834 2-hidden-layer 0.847 3-hidden-layer 0.843 Table 3: Effect of different number of hidden layers. 
Two hidden layers outperform one hidden layer, while three hidden layers do not bring further improvement. 6.4.3 Effect of number of hidden layers Our neural network contains two hidden layers besides the lookup layer. It is natural to ask whether adding more layers would be beneficial. To answer this question, we train models with 1, 2 and 3 layers respectively, all from result of IBM model 4. For 1-hidden-layer setting, we set the hidden layer length to 120; and for 3-hidden-layer setting, we set hidden layer lengths to 120, 100, 10 respectively. As can be seen from Table 3, 2hidden-layer outperforms the 1-hidden-layer setting, while another hidden layer does not bring 172 word good history british served labs zetian laggards LM bad tradition russian worked networks hongzhang underperformers great culture japanese lived technologies yaobang transferees strong practice dutch offered innovations keming megabanks true style german delivered systems xingzhi mutuals easy literature canadian produced industries ruihua non-starters WA nice historical uk offering lab hongzhang underperformers great historic britain serving laboratories qichao illiterates best developed english serve laboratory xueqin transferees pretty record classic delivering exam fuhuan matriculants excellent recording england worked experiments bingkun megabanks Table 2: Nearest neighbors of several words according to their embedding distance. LM shows neighbors of word embeddings trained by monolingual language model method; WA shows neighbors of word embeddings trained by our word alignment model. improvement. Due to time constraint, we have not tuned the hyper-parameters such as length of hidden layers in 1 and 3-hidden-layer settings, nor have we tested settings with more hidden-layers. It would be wise to test more settings to verify whether more layers would help. 6.4.4 Word Embedding Following (Collobert et al., 2011), we show some words together with its nearest neighbors using the Euclidean distance between their embeddings. As we can see from Table 2, after bilingual training, “bad” is no longer in the nearest neighborhood of “good” as they hold opposite semantic meanings; the nearest neighbor of “history” is now changed to its related adjective “historical”. Neighbors of proper nouns such as person names are relatively unchanged. For example, neighbors of word “zetian” are all Chinese names in both settings. As Chinese language lacks morphology, the single form and plural form of a noun in English often correspond to the same Chinese word, thus it is desirable that the two English words should have similar word embeddings. While this is true for relatively frequent nouns such as “lab” and “labs”, rarer nouns still remain near their monolingual embeddings as they are only modified a few times during the bilingual training. As shown in last column, neighborhood of “laggards” still consists of other plural forms even after bilingual training. 7 Conclusion In this paper, we explores applying deep neural network for word alignment task. Our model integrates a multi-layer neural network into an HMM-like framework, where context dependent lexical translation score is computed by neural network, and distortion is modeled by a simple jump-distance scheme. Our model is discriminatively trained on bilingual corpus, while huge monolingual data is used to pre-train wordembeddings. 
Experiments on large-scale Chineseto-English task show that the proposed method produces better word alignment results, compared with both classic HMM model and IBM model 4. For future work, we will investigate more settings of different hyper-parameters in our model. Secondly, we want to explore the possibility of unsupervised training of our neural word alignment model, without reliance of alignment result of other models. Furthermore, our current model use rather simple distortions; it might be helpful to use more sophisticated model such as ITG (Wu, 1997), which can be modeled by Recursive Neural Networks (Socher et al., 2011). Acknowledgments We thank anonymous reviewers for insightful comments. We also thank Dongdong Zhang, Lei Cui, Chunyang Wu and Zhenyan He for fruitful discussions. References Yoshua Bengio, Holger Schwenk, Jean-S´ebastien Sen´ecal, Fr´ederic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. Innovations in Machine Learning, pages 137–186. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and 173 Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. Advances in neural information processing systems, 19:153. Yoshua Bengio. 2009. Learning deep architectures for ai. Foundations and Trends R⃝in Machine Learning, 2(1):1–127. Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Comput. Linguist., 22(1):39–71, March. JS Bridle. 1990. Neurocomputing: Algorithms, architectures and applications, chapter probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. David Chiang. 2007. Hierarchical phrase-based translation. computational linguistics, 33(2):201–228. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. George E Dahl, Dong Yu, Li Deng, and Alex Acero. 2012. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):30–42. John DeNero and Klaus Macherey. 2011. Modelbased aligner combination using dual decomposition. In Proc. ACL. Chris Dyer, Jonathan Clark, Alon Lavie, and Noah A Smith. 2011. Unsupervised word alignment with arbitrary features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 409–419. Association for Computational Linguistics. Aria Haghighi, John Blitzer, John DeNero, and Dan Klein. 2009. Better word alignments with supervised itg models. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2Volume 2, pages 923–931. Association for Computational Linguistics. Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554. Koray Kavukcuoglu, Pierre Sermanet, Y-Lan Boureau, Karol Gregor, Micha¨el Mathieu, and Yann LeCun. 2010. Learning convolutional feature hierarchies for visual recognition. 
Advances in Neural Information Processing Systems, pages 1090–1098. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 48–54. Association for Computational Linguistics. Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324. Yann LeCun. 1985. A learning scheme for asymmetric threshold networks. Proceedings of Cognitiva, 85:599–604. Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y Ng. 2007. Efficient sparse coding algorithms. Advances in neural information processing systems, 19:801. Shujie Liu, Chi-Ho Li, and Ming Zhou. 2010. Discriminative pruning for discriminative itg alignment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL, volume 10, pages 316–324. Y MarcAurelio Ranzato, Lan Boureau, and Yann LeCun. 2007. Sparse feature learning for deep belief networks. Advances in neural information processing systems, 20:1185–1192. Robert C Moore. 2005. A discriminative framework for bilingual word alignment. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 81–88. Association for Computational Linguistics. Jan Niehues and Alex Waibel. 2012. Continuous space language models using restricted boltzmann machines. In Proceedings of the nineth International Workshop on Spoken Language Translation (IWSLT). Franz Josef Och and Hermann Ney. 2000. Giza++: Training of statistical translation models. Frank Seide, Gang Li, and Dong Yu. 2011. Conversational speech transcription using context-dependent deep neural networks. In Proc. Interspeech, pages 437–440. 174 Noah A Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 354–362. Association for Computational Linguistics. Richard Socher, Cliff C Lin, Andrew Y Ng, and Christopher D Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 26th International Conference on Machine Learning (ICML), volume 2, page 7. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics. Le Hai Son, Alexandre Allauzen, and Franc¸ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of the 2012 conference of the north american chapter of the association for computational linguistics: Human language technologies, pages 39–48. Association for Computational Linguistics. Ivan Titov, Alexandre Klementiev, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. Urbana, 51:61801. 
Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proceedings of the 16th conference on Computational linguistics-Volume 2, pages 836– 841. Association for Computational Linguistics. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics, 23(3):377–403. 175
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1733–1743, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Why-Question Answering using Intra- and Inter-Sentential Causal Relations Jong-Hoon Oh∗ Kentaro Torisawa† Chikara Hashimoto ‡ Motoki Sano§ Stijn De Saeger¶ Kiyonori Ohtake∥ Information Analysis Laboratory Universal Communication Research Institute National Institute of Information and Communications Technology (NICT) {∗rovellia,† torisawa,‡ ch,§ msano,¶stijn,∥kiyonori.ohtake}@nict.go.jp Abstract In this paper, we explore the utility of intra- and inter-sentential causal relations between terms or clauses as evidence for answering why-questions. To the best of our knowledge, this is the first work that uses both intra- and inter-sentential causal relations for why-QA. We also propose a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation proposed by Hashimoto et al. (2012). By applying these ideas to Japanese why-QA, we improved precision by 4.4% against all the questions in our test set over the current state-of-theart system for Japanese why-QA. In addition, unlike the state-of-the-art system, our system could achieve very high precision (83.2%) for 25% of all the questions in the test set by restricting its output to the confident answers only. 1 Introduction “Why-question answering” (why-QA) is a task to retrieve answers from a given text archive for a why-question, such as “Why are tsunamis generated?” The answers are usually text fragments consisting of one or more sentences. Although much research exists on this task (Girju, 2003; Higashinaka and Isozaki, 2008; Verberne et al., 2008; Verberne et al., 2011; Oh et al., 2012), its performance remains much lower than that of the state-of-the-art factoid QA systems, such as IBM’s Watson (Ferrucci et al., 2010). In this work, we propose a quite straightforward but novel approach for such difficult whyQA task. Consider the sentence A1 in Table 1, which represents the causal relation between the cause, “the ocean’s water mass ..., waves are genA1 [Tsunamis that can cause large coastal inundation are generated]effect because [the ocean’s water mass is displaced and, much like throwing a stone into a pond, waves are generated.]cause A2 [Earthquake causes seismic waves which set up the water in motion with a large force.]cause This causes [a tsunami.]effect A3 [Tsunamis]effect are caused by [the sudden displacement of huge volumes of water.]cause A4 [Tsunamis weaken as they pass through forests]effect because [the hydraulic resistance of the trees diminish their energy.]cause A5 [Automakers in Japan suspended production for an array of vehicles]effect because [the magnitude 9 earthquake and tsunami hit their country on Friday, March 11, 2011.]cause Table 1: Examples of intra/inter-sentential causal relations. Cause and effect parts of each causal relation, marked with [..]cause and [..]effect, are connected by the underlined cue phrases for causality, such as because, this causes, and are caused by. erated,” and its effect, “Tsunamis ... are generated.” This is a good answer to the question, “Why are tsunamis generated?”, since the effect part is more or less equivalent to the (propositional) content of the question. Our method finds text fragments that include such causal relations with an effect part that resembles a given question and provides them as answers. 
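As a toy illustration of this idea (and nothing more), the sketch below ranks extracted causal relations by how much their effect parts overlap with the question and returns their cause parts as answer evidence. Plain word overlap stands in for the far richer matching the actual method performs, and every name here is ours.

```python
def rank_causal_relations(question, relations):
    """Toy ranking of (cause, effect) pairs: prefer relations whose effect
    part shares content words with the why-question."""
    q_terms = set(question.lower().replace('?', '').split())

    def overlap(effect):
        terms = set(effect.lower().split())
        return len(q_terms & terms) / max(len(q_terms), 1)

    scored = [(overlap(effect), cause, effect) for cause, effect in relations]
    return sorted(scored, reverse=True)

# For "Why are tsunamis generated?", the effect part of A1 in Table 1
# ("Tsunamis ... are generated") shares more terms with the question than
# that of A4 ("Tsunamis weaken ..."), so A1's cause part is ranked higher.
```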
Since this idea looks quite intuitive, many people would probably consider it as a solution to why-QA. However, to our surprise, we could not find any previous work on why-QA that took this approach. Some methods utilized the causal relations between terms as evidence for finding answers (i.e., matching a cause term with an answer text and its effect term with a question) (Girju, 2003; Higashinaka and Isozaki, 2008). Other approaches utilized such clue terms for causality as “because” as evidence for finding answers (Murata et al., 2007). However, these algorithms did not check whether an answer candidate, i.e., a text fragment that may be provided as an answer, explicitly contains a complex causal relation sen1733 tence with the effect part that resembles a question. For example, A5 in Table 1 is an incorrect answer to “Why are tsunamis generated?”, but these previous approaches would probably choose it as a proper answer due to “because” and “earthquake” (i.e., a cause of tsunamis). At least in our experimental setting, our approach outperformed these simpler causality-based QA systems. Perhaps this approach was previously deemed infeasible due to two non-trivial technical challenges. The first challenge is to accurately identify a wide range of causal relations like those in Table 1 in answer candidates. To meet this challenge, we developed a sequence labeling method that identifies not only intra-sentential causal relations, i.e., the causal relations between two terms/phrases/clauses expressed in a single sentence (e.g., A1 in Table 1), but also the intersentential causal relations, which are the causal relations between two terms/phrases/clauses expressed in two adjacent sentences (e.g., A2) in a given text fragment. The second challenge is assessing the appropriateness of each identified causal relation as an answer to a given question. This is important since the causal relations identified in the answer candidates may have nothing to do with a given question. In this case, we have to reject these causal relations because they are inappropriate as an answer to the question. When a single answer candidate contains many causal relations, we also have to select the appropriate ones. Consider the causal relations in A1–A4. Those in A1–A3 are appropriate answers to “Why are tsunamis generated?”, but not the one in A4. To assess the appropriateness, the system must recognize textual entailment, i.e., “tsunamis (are) generated” in the question is entailed by all “tsunamis are generated” in A1, “cause a tsunami” in A2 and “tsunamis are caused” in A3 but not by “tsunamis weaken” in A4. This quite difficult task is currently being studied by many researchers in the RTE field (Androutsopoulos and Malakasiotis, 2010; Dagan et al., 2010; Shima et al., 2011; Bentivogli et al., 2011). To meet this challenge, we developed a relatively simple method that can be seen as a lightweight approximation for this difficult RTE task, using excitation polarities (Hashimoto et al., 2012). Through our experiments on Japanese why-QA, we show that a combination of the above methods can improve why-QA accuracy. In addition, our proposed method can be successfully combined with other approaches to why-QA and can contribute to higher accuracy. As a final result, we improved the precision by 4.4% against all the questions in our test set over the current state-of-the-art system of Japanese why-QA (Oh et al., 2012). 
The difference in performance became much larger when we compared only the highly confident answers of each system. When our system provided only its confident answers, selected according to the confidence scores it assigns, the precision of these confident answers was 83.2% for 25% of all the questions in our test set. In the same setting, the precision of the state-of-the-art system (Oh et al., 2012) was only 62.4%.

2 Related Work

Although there has been much previous work on the acquisition of intra- and inter-sentential causal relations from texts (Khoo et al., 2000; Girju, 2003; Inui and Okumura, 2005; Chang and Choi, 2006; Torisawa, 2006; Blanco et al., 2008; De Saeger et al., 2009; De Saeger et al., 2011; Riaz and Girju, 2010; Do et al., 2011; Radinsky et al., 2012), its application to why-QA was limited to causal relations between terms (Girju, 2003; Higashinaka and Isozaki, 2008). Previous attempts to improve why-QA performance have used such semantic knowledge as WordNet synsets (Verberne et al., 2011), semantic word classes (Oh et al., 2012), sentiment analysis (Oh et al., 2012), and causal relations between terms (Girju, 2003; Higashinaka and Isozaki, 2008). These studies took essentially bag-of-words approaches and used the semantic knowledge to identify certain semantic associations using terms and n-grams. In contrast, our method explicitly identifies intra- and inter-sentential causal relations between terms/phrases/clauses that have complex structures and uses the identified relations to answer a why-question. In other words, our method considers more complex linguistic structures than those used in the previous studies. Note that our method can complement the previous approaches. Through our experiments, we show that it is possible to achieve higher precision by combining our proposed method with the bag-of-words approaches of our previous work (Oh et al., 2012), which consider semantic word classes and sentiment analysis.

[Figure 1: System architecture. A why-question enters answer candidate extraction (document retrieval from Japanese web texts, followed by answer candidate extraction from the retrieved documents); the candidates are then passed to an answer re-ranker, which is trained on training data for answer re-ranking and uses a causal relation recognition model trained on training data for causal relation recognition, producing the top-n answer candidates by answer re-ranking.]

3 System Architecture

We first describe the system architecture of our QA system before describing our proposed method. It is composed of two components: answer candidate extraction and answer re-ranking (Fig. 1). This architecture is basically the same as that used in our previous work (Oh et al., 2012). We extended our previous work by introducing causal relations recognized from answer candidates into the answer re-ranking. The features used in our previous work are very different from those in this work, and we found that combining both improves accuracy.

Answer candidate extraction: In our previous work, we implemented the method of Murata et al. (2007) for our answer candidate extractor. We retrieved documents from Japanese web texts using Boolean AND and OR queries generated from the content words in why-questions. Then we extracted passages of five sentences from these retrieved documents and ranked them with the ranking function proposed by Murata et al. (2007).
This method ranks a passage higher when it contains more query terms that appear closer to each other in the passage. We used a set of clue terms, including the Japanese counterparts of cause and reason, as query terms for the ranking. The top-ranked passages are regarded as answer candidates in the answer re-ranking. See Murata et al. (2007) for more details.

Answer re-ranking: Re-ranking the answer candidates is done by a supervised classifier, an SVM (Vapnik, 1995). In our previous work, we employed three types of features for training the re-ranker: morphosyntactic features (n-grams of morphemes and syntactic dependency chains), semantic word class features (semantic word classes obtained by automatic word clustering (Kazama and Torisawa, 2008)), and sentiment polarity features (word and phrase polarities). There, semantic word classes and sentiment polarities were used to identify semantic associations between a why-question and its answer, such as "if a disease's name appears in a question, then answers that include nutrient names are more likely to be correct" (captured by semantic word classes) and "if something undesirable happens, the reason is often also something undesirable" (captured by sentiment polarities). In this work, we propose causal relation features generated from intra- and inter-sentential causal relations in answer candidates and use them along with the features proposed in our previous work for training our re-ranker.

4 Causal Relations for Why-QA

We describe causal relation recognition in Section 4.1 and the features of our re-ranker generated from causal relations in Section 4.2.

4.1 Causal Relation Recognition

We restrict causal relations to those expressed by cue phrases for causality, such as the Japanese counterparts of because and as a result, following previous work (Khoo et al., 2000; Inui and Okumura, 2005), and recognize them in two steps: extracting causal relation candidates and recognizing causal relations from these candidates.

4.1.1 Extracting Causal Relation Candidates

We identify cue phrases for causality in answer candidates using the regular expressions in Table 2. Then, for each identified cue phrase, we extract three sentences as a causal relation candidate: the sentence containing the cue phrase and the previous and next sentences in the answer candidate. When there is more than one cue phrase in an answer candidate, we use all of them for extracting causal relation candidates, assuming that each cue phrase is linked to a different causal relation. We call the cue phrase used for extracting a causal relation candidate the c-marker (causality marker) of that candidate, to distinguish it from the other cue phrases in the same causal relation candidate.

Table 2: Regular expressions for identifying cue phrases for causality.
  Regular expression   Examples
  (D|の)? ため P?       ため (for), のため (for), そのため (as a result), のために (for)
  ので                  ので (since or because of)
  こと(から|で)          ことから (from the fact that), ことで (by the fact that)
  (から|ため) C          からだ (because), ためだ (It is because)
  D? RCT (P|C)+         理由は (the reason is), 原因だ (is the cause), この理由から (from this reason)
D, P and C represent demonstratives (e.g., この (this) and その (that)), postpositions (including case markers such as が (nominative) and の (genitive)), and copulas (e.g., です (is) and である (is)) in Japanese, respectively. RCT represents Japanese terms meaning reason, cause, or thanks to: RCT = {理由 (reason), 原因 (cause), 要因 (cause), 引き金 (cause), おかげ (thanks to), せい (thanks to), わけ (reason)}.
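As a rough illustration of this extraction step, the following sketch identifies cue phrases with a simplified pattern and, for each match, keeps the matching sentence together with its previous and next sentences as one causal relation candidate. The pattern covers only a few of the cues in Table 2, plus an English "because" so that the toy example is readable; both are assumptions for illustration only.

import re

# Simplified cue-phrase pattern: a handful of the Japanese cues from Table 2
# plus an English "because" for the toy example below.
CUE_PATTERN = re.compile(r"(そのため|のため|ため|ので|ことから|because)")

def extract_candidates(sentences):
    """Return (prev, current, next, cue) tuples; the cue in `current` is the c-marker."""
    candidates = []
    for i, sent in enumerate(sentences):
        for m in CUE_PATTERN.finditer(sent):
            prev_s = sentences[i - 1] if i > 0 else ""
            next_s = sentences[i + 1] if i + 1 < len(sentences) else ""
            candidates.append((prev_s, sent, next_s, m.group(0)))
    return candidates

if __name__ == "__main__":
    passage = [
        "Earthquake causes seismic waves which set up the water in motion.",
        "This displaces a huge volume of water because of the sudden movement.",
        "A tsunami is then generated.",
    ]
    for prev_s, cur, nxt, cue in extract_candidates(passage):
        print("c-marker:", cue)
        print("candidate:", prev_s, "|", cur, "|", nxt)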
4.1.2 Recognizing Causal Relations

Next, we recognize the spans of the cause and effect parts of the causal relation linked to a c-marker. We regard this task as a sequence labeling problem and use Conditional Random Fields (CRFs) (Lafferty et al., 2001) as the machine learning framework. In our task, the CRFs take the three sentences of a causal relation candidate as input and generate cause-effect annotations over them using a set of IOB labels: Begin-Cause (B-C), Inside-Cause (I-C), Begin-Effect (B-E), Inside-Effect (I-E), and Outside (O). Fig. 2 shows an example of such sequence labeling. Although the example shows the labeling on English sentences for ease of explanation, the labeling was actually done on Japanese sentences.

[Figure 2: Recognizing causal relations by sequence labeling. The causal relation candidate from A2 ("Earthquake causes seismic waves which set up the water in motion with a large force. This causes a tsunami.") is labeled by the CRFs: the first sentence is tagged as the cause (B-C, I-C, ...), and "a tsunami" in the second sentence as the effect (B-E, I-E, ...). The underlined text "This causes" is the c-marker; EOS and EOA represent end-of-sentence and end-of-answer-candidate markers.]

We used the three types of feature sets in Table 3 for training the CRFs, where j ranges over i − 4 ≤ j ≤ i + 4 for the current position i in a causal relation candidate.

Table 3: Features for training the CRFs, where x_j^{j+1} denotes the concatenation x_j x_{j+1}.
  Type                    Features
  Morphological feature   m_j, m_j^{j+1}, pos_j, pos_j^{j+1}
  Syntactic feature       s_j, s_j^{j+1}, b_j, b_j^{j+1}
  C-marker feature        (m_j, cm), (m_j^{j+1}, cm), (s_j, cm), (s_j^{j+1}, cm)

Morphological features: m_j and pos_j in Table 3 represent the j-th morpheme and its POS tag. We use JUMAN (http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN), a Japanese morphological analyzer, to generate our morphological features.

Syntactic features: The span of the causal relation in a given causal relation candidate strongly depends on the c-marker in the candidate. Especially for intra-sentential causal relations, the cause and effect parts often appear in the subtrees of the c-marker's node or of the c-marker's parent node in the syntactic dependency tree. Fig. 3 shows an example that follows this observation, where the c-marker node is drawn as a hexagon and the other nodes as rectangles. Note that each node in Fig. 3 is a word phrase (called a bunsetsu), the smallest unit of syntactic analysis in Japanese. A bunsetsu is a syntactic constituent composed of a content word and several function words such as postpositions and case markers. Syntactic dependency is represented by an arrow in Fig. 3.

[Figure 3: Example of the syntactic information related to a c-marker used for the syntactic features. The dependency tree of [水が氷になると体積が増加する]cause ため、[氷山は水に浮くことができる]effect (Because [the volume of the water increases if it becomes ice]cause, [an iceberg floats on water]effect) is shown; 増加するため、 is the c-marker node, 浮くことができる is the root, and the remaining nodes are labeled child, subtree, parent, or subtree-of-parent relative to the c-marker.]
For example, there is a syntactic dependency from the word phrase 水が (water) to なると (if (it) becomes), i.e., 水が --dep--> なると. We encode this subtree information into s_j, the syntactic information of the word phrase to which the j-th morpheme belongs. s_j takes one of six values: 1) the c-marker's node (c-marker), 2) a child node of the c-marker (child), 3) the c-marker's parent node (parent), 4) in the c-marker's subtree but not a child of the c-marker (subtree), 5) in the subtree of the c-marker's parent node but not in the c-marker's subtree (subtree-of-parent), and 6) the others (others). b_j is the word-phrase information of the j-th morpheme (m_j), representing whether m_j is at the beginning of or inside a word phrase. To generate our syntactic features, we use KNP (http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?KNP), a Japanese syntactic dependency parser.

C-marker features: As our c-marker features, we use a pair composed of the c-marker cm and one of the following: m_j, m_j^{j+1}, s_j, or s_j^{j+1}.

4.2 Causal Relation Features

We use terms, partial trees (in a syntactic dependency tree structure), and the semantic orientation of excitation (Hashimoto et al., 2012) to assess the appropriateness of each causal relation obtained by our causal relation recognizer as an answer to a given question. Finding answers with term matching and partial tree matching has been used in the question answering literature (Girju, 2003; Narayanan and Harabagiu, 2004; Moschitti et al., 2007; Higashinaka and Isozaki, 2008; Verberne et al., 2008; Surdeanu et al., 2011; Verberne et al., 2011; Oh et al., 2012), while matching with excitation polarity is proposed in this work. We use three types of features. Each feature type expresses the causal relations in an answer candidate that are judged appropriate as answers to a given question by term matching (tf1-tf4), partial tree matching (pf1-pf4), or excitation polarity matching (ef1-ef4). We call the causal relations used for generating our causal relation features "candidates of an appropriate causal relation" in this section. Note that if one answer candidate has more than one candidate of an appropriate causal relation found by one matching method, we generate features for each appropriate candidate and merge all of them for the answer candidate.

Table 4: Causal relation features. n in the n-grams is n = {2, 3}, and n-grams in an effect part are distinguished from those in a cause part.
  Type  Description
  tf1   word n-grams of causal relations
  tf2   word class version of tf1
  tf3   indicator for the existence of candidates of an appropriate causal relation identified by term matching in an answer candidate
  tf4   number of matched terms in candidates of an appropriate causal relation
  pf1   syntactic dependency n-grams (n dependency chain) of causal relations
  pf2   word class version of pf1
  pf3   indicator for the existence of candidates of an appropriate causal relation identified by partial tree matching in an answer candidate
  pf4   number of matched partial trees in candidates of an appropriate causal relation
  ef1   types of noun-polarity pairs shared by causal relations and the question
  ef2   ef1 coupled with each noun's word class
  ef3   indicator for the existence of candidates of an appropriate causal relation identified by excitation polarity matching in an answer candidate
  ef4   number of noun-polarity pairs shared by the question and the candidates of an appropriate causal relation
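Looking back at the syntactic feature s_j of Section 4.1.2, the sketch below assigns each node of a toy dependency tree to one of the six categories relative to the c-marker node. Representing the tree as a parent array over bunsetsu-like phrases is an assumption of this sketch; the paper obtains the actual trees from KNP.

def descendants(parent, node):
    # All nodes whose chain of parents passes through `node`.
    out = set()
    for i in range(len(parent)):
        j = i
        while j != -1:
            j = parent[j]
            if j == node:
                out.add(i)
                break
    return out

def syntactic_category(parent, node, cmarker):
    """One of: c-marker, child, parent, subtree, subtree-of-parent, others."""
    if node == cmarker:
        return "c-marker"
    if parent[node] == cmarker:
        return "child"
    if node == parent[cmarker]:
        return "parent"
    if node in descendants(parent, cmarker):
        return "subtree"
    if parent[cmarker] != -1 and node in descendants(parent, parent[cmarker]):
        return "subtree-of-parent"
    return "others"

if __name__ == "__main__":
    # Toy tree loosely following Fig. 3: parent[i] = head of phrase i (-1 = root).
    phrases = ["水が", "なると", "体積が", "増加するため、", "浮くことができる"]
    parent = [1, 3, 3, 4, -1]      # 増加するため、 (index 3) is the c-marker node
    for i, p in enumerate(phrases):
        print(p, "->", syntactic_category(parent, i, cmarker=3))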
4.2.1 Term Matching Our term matching method judges that a causal relation is a candidate of an appropriate causal relation if its effect part contains at least one content word (nouns, verbs, and adjectives) in the question. For example, all the causal relations of A1– A4 in Table 1 are candidates of an appropriate causal relation to the question, “Why is a tsunami generated?”, by term matching with question term tsunami. tf1–tf4 are generated from candidates of an appropriate causal relation identified by term matching. The n-grams of tf1 and tf2 are restricted to those containing at least one content word in a question. We distinguish this matched word from the other words by replacing it with QW, a special symbol representing a word in the question. For example, word 3-gram “this/cause/QW” is extracted from This causes tsunamis in A2 for “Why is a tsunami generated?” Further, we create a word class version of word n-grams by converting the words in these word n-grams into their corresponding word class using the semantic word classes (500 classes for 5.5 million nouns) from our previous work (Oh et al., 2012). These word classes were created by applying the automatic word clustering method of Kazama and Torisawa (2008) to 600 million Japanese web pages. For example, the word class version of word 3-gram 1737 “this/cause/QW” is “this/cause/QW,WCtsunami”, where WCtsunami represents the word class of a tsunami. tf3 is a binary feature that indicates the existence of candidates of an appropriate causal relation identified by term matching in an answer candidate. tf4 represents the degree of the relevance of the candidates of an appropriate causal relation measured by the number of matched terms: one, two, and more than two. 4.2.2 Partial Tree Matching Our partial tree matching method judges a causal relation as a candidate of an appropriate causal relation if its effect part contains at least one partial tree in a question, where the partial tree covers more than one content word. For example, only the causal relation A1 among A1–A4 is a candidate of an appropriate causal relation for question “Why are tsunamis generated?” by partial tree matching because only its effect part contains partial tree “tsunamis dep −−→(are) generated” of the question. pf1–pf4 are generated from candidates of an appropriate causal relation identified by the partial tree matching. The syntactic dependency ngrams in pf1 and pf2 are restricted to those that contain at least one content word in a question. We distinguish this matched content word from the other content words in the n-gram by converting it to QW, which represents a content word in the question. For example, syntactic dependency 2gram “QW dep −−→cause” and its word class version “QW,WCtsunami dep −−→cause” are extracted from Tsunamis that can cause in A1. pf3 is a binary feature that indicates whether an answer candidate contains candidates of an appropriate causal relation identified by partial tree matching. pf4 represents the degree of the relevance of the candidate of an appropriate causal relation measured by the number of matched partial trees: one, two, and more than two. 4.2.3 Excitation Polarity Matching Hashimoto et al. (2012) proposed a semantic orientation called excitation polarities. It classifies predicates with their argument position (called templates) into excitatory, inhibitory and neutral. 
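Before the excitation-based matching is spelled out, here is a minimal sketch of the term-matching features of Section 4.2.1, a rough analogue of tf1 and tf4: content words shared with the question are replaced by the symbol QW, and only n-grams containing QW are kept. The toy tokenization is an assumption of the sketch.

def term_match_features(question_words, effect_words, n_values=(2, 3)):
    """QW-replaced word n-grams from an effect part plus a matched-term count."""
    qset = {w.lower() for w in question_words}
    tokens = ["QW" if w.lower() in qset else w.lower() for w in effect_words]
    feats = []
    for n in n_values:
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if "QW" in gram:
                feats.append("/".join(gram))
    matched = sum(1 for t in tokens if t == "QW")   # feeds a tf4-style count feature
    return feats, matched

if __name__ == "__main__":
    q = ["why", "is", "a", "tsunami", "generated"]
    effect = ["This", "causes", "a", "tsunami"]
    print(term_match_features(q, effect))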
In the following, we denote a template as “[argument position,predicate].” According to Hashimoto’s definition, excitatory templates imply that the function, effect, purpose, or the role of an entity filling an argument position in the templates is activated/enhanced. On the contrary, inhibitory templates imply that the effect, purpose or the role of an entity is deactivated/suppressed. Neutral templates are those that neither activate nor suppress the function of an argument. We assume that the meanings of a text can be roughly captured by checking whether each noun in the text is activated or suppressed in the sense of the excitation polarity framework, where the activation and suppression of each entity (or noun) can be detected by looking at the excitation polarities of the templates that are filled by the entity. For instance, effect part “tsunamis that can cause large coastal inundation are generated” of A1 roughly means that “tsunamis” are activated and “inundation” is (or can be) activated. This activation/suppression configuration of the nouns is consistent with sentence “tsunamis are caused” in which “tsunamis” are activated. This consistency suggests that A1 is a good answer to question “Why are tsunamis caused?”, although the “tsunamis” are modified by different predicates; “cause” and “generate.” On the other hand, effect part “tsunamis weaken as they pass through forests” of A4 implies that “tsunamis” are suppressed. This suggests that A4 is not a good answer to “Why are tsunamis caused?” Note that the consistency checking between activation/suppression configurations of nouns3 in texts can be seen as a rough but lightweight approximation of the recognition of textual entailments or paraphrases. Following the definition of excitation polarity in Hashimoto et al. (2012), we manually classified templates4 to each polarity type and obtained 8,464 excitatory templates, such as [が, 増える] ([subject, increase]) and [が, 向上する] ([subject, improve]), 2,262 inhibitory templates, such as [を, 防ぐ] ([object, prevent]) and [が, 死ぬ] ([subject, die]), and 7,230 neutral templates such as [を, 考える] ([object, consider]). With these templates, we obtain activation/suppression configurations (including neutral) for the nouns in the causal relations in the answer candidates and ques3 Because the activation/suppression configurations of nouns come from an excitation polarity of templates, “[argument position,predicate],” the semantics of verbs in the templates are implicitly considered in this consistency checking. 4 Varga et al. (2013) has used the same templates as ours, except they restricted their excitation/inhibitory templates to those whose polarity is consistent with that given by the automatic acquisition method of Hashimoto et al. (2012). 1738 tions. Next, we assume that a causal relation is appropriate as an answer to a question if the effect part of the causal relation and the question share at least one common noun with the same polarity. More detailed information concerning the configurations of all the nouns in all the candidates of an appropriate causal relation (including their cause parts) and the question are encoded into our feature set ef1–ef4 in Table 4 and the final judgment is done by our re-ranker. 
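A minimal sketch of this consistency check is given below. It assumes that every noun has already been assigned an activation/suppression/neutral label by looking up the excitation polarity of the template it fills; that lookup, and the label names, are assumptions of the sketch.

def noun_polarities(pairs):
    # pairs: iterable of (noun, polarity) with polarity in {"act", "sup", "neu"}.
    return dict(pairs)

def polarity_consistent(question_pairs, effect_pairs):
    """True if the effect part and the question share at least one noun
    with the same activation/suppression polarity."""
    q = noun_polarities(question_pairs)
    e = noun_polarities(effect_pairs)
    return any(noun in q and q[noun] == pol for noun, pol in e.items())

if __name__ == "__main__":
    question = [("tsunami", "act")]                       # "Why are tsunamis caused?"
    effect_a1 = [("tsunami", "act"), ("inundation", "act")]
    effect_a4 = [("tsunami", "sup")]                      # "tsunamis weaken ..."
    print(polarity_consistent(question, effect_a1))       # True  -> appropriate candidate
    print(polarity_consistent(question, effect_a4))       # False -> rejected candidate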
For generating ef1 and ef2, we classified all the nouns coupled with activation/suppression/neutral polarities in a causal relation into three types: SAME (the question contains the same noun with the same polarity), DiffPOL (the question contains the same noun with different polarity), and OTHER (the others). ef1 indicates whether each type of noun-polarity pair exists in a causal relation. Note that the types for the effect and cause parts are represented in distinct features. ef2 is the same as ef1 except that the types are augmented with the word classes of the corresponding nouns. In other words, ef2 indicates whether each type of noun-polarity pair exists in the causal relation for each word class. ef3 indicates the existence of candidates of an appropriate causal relation identified by this matching scheme, and ef4 represents the number of noun-polarity pairs shared by the question and the candidates of an appropriate causal relations (one, two, and more than two). 5 Experiments We experimented with causal relation recognition and why-QA with our causal relation features. 5.1 Data Set for Why-Question Answering For our experiments, we used the same why-QA data set as the one used in our previous work (Oh et al., 2012). This why-QA data set is composed of 850 Japanese why-questions and their top-20 answer candidates obtained by answer candidate extraction from 600 million Japanese web pages. Three annotators checked the top-20 answer candidates of these 850 questions and the final judgment was made by their majority vote. Their interrater agreement by Fleiss’ kappa reported in Oh et al. (2012) was substantial (κ = 0.634). Among the 850 questions, 250 why-questions were extracted from the Japanese version of Yahoo! Answers, and another 250 were created by annotators. In our previous work, we evaluated the system with these 500 questions and their answer candidates as training and test data in 10-fold cross-validation. The other 350 why-questions were manually built from passages describing the causes or reasons of events/phenomena. These questions and their answer candidates were used as additional training data for testing subsamples in each fold during the 10-fold cross-validation. In our why-QA experiments, we evaluated our why-QA system with the same settings. 5.2 Data Set for Causal Relation Recognition We built a data set composed of manually annotated causal relations for evaluating our causal relation recognition. As source data for this data set, we used the same 10-fold data that we used for evaluating our why-QA (500 questions and their answer candidates). We extracted the causal relation candidates from the answer candidates in each fold, and then our annotator (not an author) manually marked the span of the cause and effect parts of a causal relation for each causal relation candidate, keeping in mind that the causal relation must be expressed in terms of a c-marker in a given causal relation candidate. Finally, we had a data set made of 16,051 causal relation candidates, 8,117 of which had a true causal relation; the number of intra- and inter-sentential causal relations were 7,120 and 997, respectively. Note that this data set can be partitioned into ten folds by using the 10-fold partition of its source data. We performed 10-fold cross validation to evaluate our causal relation recognition with this 10-fold data. 5.3 Causal Relation Recognition We used CRF++5 for training our causal relation recognizer. 
In our evaluation, we judged a system's output as correct if both the cause span and the effect span overlapped those in the gold standard. Evaluation was done by precision, recall, and F1.

[5] http://code.google.com/p/crfpp/

Table 5: Results of causal relation recognition (%)
              Precision  Recall  F1
  BASELINE      41.9      61.0   49.7
  INTRA-SENT    84.5      75.4   79.7
  INTER-SENT    80.2      52.6   63.6
  ALL           83.8      71.1   77.0

Table 5 shows the result. BASELINE represents our baseline system, which recognizes a causal relation by simply taking the two phrases adjacent to a c-marker (i.e., before and after it) as the cause and effect parts. We assumed that this system had an oracle for correctly judging whether each phrase is a cause part or an effect part; in other words, a causal relation recognized by BASELINE is judged correct if both the cause and effect parts in the gold standard are adjacent to the c-marker. INTRA-SENT and INTER-SENT represent the results for intra- and inter-sentential causal relations, and ALL represents the result for both types of causal relations recognized by our method. From these results, we confirmed that our method recognized both intra- and inter-sentential causal relations with over 80% precision and that it significantly outperformed our baseline system in both precision and recall.

Table 6: Ablation test results for causal relation recognition (%)
                   Precision  Recall  F1
  ALL-"MORPH"        80.8      66.4   72.9
  ALL-"SYNTACTIC"    82.9      67.0   74.1
  ALL-"C-MARKER"     76.3      51.4   61.4
  ALL                83.8      71.1   77.0

We also investigated the contribution of the three types of features used in our causal relation recognition. We evaluated the performance when one of the three types of features was removed (ALL-"MORPH", ALL-"SYNTACTIC" and ALL-"C-MARKER") and compared these settings with the one using all the feature sets (ALL). Table 6 shows the result. We confirmed that every feature set improved the performance and that the best performance was obtained when all of them were used. We used the causal relations obtained from the 10-fold cross validation for our why-QA experiments.

5.4 Why-Question Answering

We performed why-QA experiments to confirm the effectiveness of intra- and inter-sentential causal relations in a why-QA task. In this experiment, we compared five systems: four baseline systems (MURATA, OURCF, OH and OH+PREVCF) and our proposed method (PROPOSED). MURATA corresponds to our answer candidate extraction. OURCF uses a re-ranker trained with only our causal relation features. OH, which represents our previous work (Oh et al., 2012), has a re-ranker trained with morphosyntactic, semantic word class, and sentiment polarity features. OH+PREVCF has a re-ranker trained with the features used in OH plus the causal relation feature proposed in Higashinaka and Isozaki (2008); that feature is an indicator of whether a causal relation between two terms appears in a question-answer pair (the cause in an answer and its effect in the question). We acquired the causal relation instances (between terms) from 600 million Japanese web pages using the method of De Saeger et al. (2009) and exploited the top 100,000 causal relation instances in this system. PROPOSED has a re-ranker trained with our causal relation features as well as the three types of features proposed in Oh et al. (2012). Comparison between OH and PROPOSED reveals the contribution of our causal relation features to why-QA.
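For concreteness, the correctness criterion used above for causal relation recognition, that both the predicted cause span and the predicted effect span overlap the gold spans, can be sketched as follows; representing spans as (start, end) character offsets is an assumption of this sketch.

def overlaps(a, b):
    # a, b: (start, end) character offsets, end exclusive.
    return a[0] < b[1] and b[0] < a[1]

def is_correct(pred, gold):
    """pred/gold: dicts with 'cause' and 'effect' spans."""
    return overlaps(pred["cause"], gold["cause"]) and overlaps(pred["effect"], gold["effect"])

def precision_recall_f1(predictions, golds):
    # predictions/golds: per-candidate relations, None when no relation is predicted/annotated.
    tp = sum(1 for p, g in zip(predictions, golds)
             if p is not None and g is not None and is_correct(p, g))
    n_pred = sum(1 for p in predictions if p is not None)
    n_gold = sum(1 for g in golds if g is not None)
    prec = tp / n_pred if n_pred else 0.0
    rec = tp / n_gold if n_gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

if __name__ == "__main__":
    pred = {"cause": (0, 40), "effect": (45, 80)}
    gold = {"cause": (5, 38), "effect": (50, 85)}
    print(is_correct(pred, gold))   # True: both spans overlap the gold spans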
We used TinySVM (http://chasen.org/~taku/software/TinySVM/) with a linear kernel for training the re-rankers in OURCF, OH, OH+PREVCF and PROPOSED. Evaluation was done by P@1 (precision of the top answer) and Mean Average Precision (MAP), the same measures used in Oh et al. (2012). P@1 measures how many questions have a correct top-answer candidate, while MAP measures the overall quality of the top-20 answer candidates. As mentioned in Section 5.1, we used 10-fold cross-validation with the same setting as Oh et al. (2012).

Table 7: Why-QA results (%)
             P@1   MAP
  MURATA     22.2  27.0
  OURCF      27.8  31.4
  OH         37.4  39.1
  OH+PREVCF  37.4  38.9
  PROPOSED   41.8  41.0

Table 7 shows the evaluation results. Our proposed method outperformed the other four systems and improved P@1 by 4.4% over OH, the state-of-the-art system for Japanese why-QA. OURCF improved over MURATA; although this suggests the effectiveness of our causal relation features on their own, the overall performance of OURCF was lower than that of OH. OH+PREVCF outperformed neither OH nor PROPOSED. This suggests that our approach is more effective than previous causality-based approaches (Girju, 2003; Higashinaka and Isozaki, 2008), at least in our setting.

[Figure 4: Effect of causal relation features on the top answers. The plot shows precision (%) on the y-axis against the percentage of questions answered on the x-axis (both from 10 to 100), with curves for PROPOSED, OH, and OURCF.]

We also compared the confident answers of OURCF, OH, and PROPOSED by making each system provide only the k most confident top answers (for k questions), selected by the SVM scores given by each system's re-ranker. This reduces the number of questions a system can answer, but the top answers become more reliable as k decreases. Fig. 4 shows this result, where the x-axis represents the percentage of questions (out of all the questions in our test set) whose top answers are given by each system, and the y-axis represents the precision of those top answers. When both systems provided top answers for 25% of all the questions in our test set, our method achieved 83.2% precision, much higher than OH's 62.4%. This experiment confirmed that our causal relation features were also effective in improving the quality of highly confident answers. However, the high precision of our method was limited to the confident answers of a small number of questions, and the difference in precision between OH and PROPOSED in Fig. 4 became smaller as more answers with lower confidence were considered. We think one of the reasons is the relatively small coverage of the excitation polarity lexicon, a core resource in our excitation polarity matching; we plan to enlarge the lexicon to address this problem.

Next, we investigated the contribution of the intra- and inter-sentential causal relations to the performance of our method. We used only one of the two types of causal relations for generating causal relation features (INTRA-SENT and INTER-SENT) when training our re-ranker and compared these settings with the one using both types (ALL (PROPOSED)). Table 8 shows the result. Both intra- and inter-sentential causal relations contributed to the performance improvement.
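The confidence-based comparison in Fig. 4 can be outlined as below: each question contributes its single top answer with the re-ranker's SVM score, the questions are sorted by that score, and precision is measured over the most confident fraction. The tuple format and the synthetic data are assumptions of this sketch.

def precision_at_coverage(top_answers, coverage):
    """top_answers: list of (svm_score, is_correct) for each question's top answer.
    coverage: fraction of questions answered (e.g., 0.25 for the 25% setting)."""
    ranked = sorted(top_answers, key=lambda t: t[0], reverse=True)
    k = max(1, int(len(ranked) * coverage))
    kept = ranked[:k]
    return sum(1 for _, correct in kept if correct) / k

if __name__ == "__main__":
    import random
    random.seed(0)
    answers = []
    for _ in range(200):
        s = random.random()
        # Synthetic data: correctness loosely follows the score, for illustration only.
        answers.append((s, random.random() < 0.3 + 0.5 * s))
    for cov in (0.25, 0.5, 1.0):
        print(cov, round(precision_at_coverage(answers, cov), 3))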
Table 8: Results with/without intra- and inter-sentential causal relations (%)
                   P@1   MAP
  INTER-SENT       39.0  39.7
  INTRA-SENT       40.4  40.5
  ALL (PROPOSED)   41.8  41.0

We also investigated the contributions of the three types of causal relation features by ablation tests (Table 9). When the features from excitation polarity matching are removed (ALL-{ef1-ef4}), the performance is the worst. This implies that the contribution of excitation polarity matching exceeds that of the other two.

Table 9: Ablation test results for why-QA (%)
                   P@1   MAP
  ALL-{tf1-tf4}    40.8  40.7
  ALL-{pf1-pf4}    41.0  40.9
  ALL-{ef1-ef4}    39.6  40.5
  ALL (PROPOSED)   41.8  41.0

6 Conclusion

In this paper, we explored the utility of intra- and inter-sentential causal relations for ranking answer candidates to why-questions. We also proposed a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation. Through experiments, we confirmed that these ideas are effective for improving why-QA: our proposed method achieved 41.8% P@1, a 4.4% improvement over the current state-of-the-art system for Japanese why-QA. We also showed that our system achieved 83.2% precision for its confident answers when it provided such answers for only 25% of all the questions in our test set.

References

Ion Androutsopoulos and Prodromos Malakasiotis. 2010. A survey of paraphrasing and textual entailment methods. Journal of Artificial Intelligence Research (JAIR), 38(1):135-187.
Luisa Bentivogli, Peter Clark, Ido Dagan, Hoa Dang, and Danilo Giampiccolo. 2011. The seventh PASCAL recognizing textual entailment challenge. In Proceedings of TAC.
E. Blanco, N. Castell, and Dan I. Moldovan. 2008. Causal relation extraction. In Proceedings of LREC '08.
Du-Seong Chang and Key-Sun Choi. 2006. Incremental cue phrase learning and bootstrapping method for causality extraction using cue phrase and word pair probabilities. Information Processing and Management, 42(3):662-678.
Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2010. Recognizing textual entailment: Rational, evaluation and approaches. Natural Language Engineering, 16(1):1-17.
Stijn De Saeger, Kentaro Torisawa, Jun'ichi Kazama, Kow Kuroda, and Masaki Murata. 2009. Large scale relation acquisition using class dependent patterns. In Proceedings of ICDM '09, pages 764-769.
Stijn De Saeger, Kentaro Torisawa, Masaaki Tsuchida, Jun'ichi Kazama, Chikara Hashimoto, Ichiro Yamada, Jong-Hoon Oh, István Varga, and Yulan Yan. 2011. Relation acquisition using word classes and partial patterns. In Proceedings of EMNLP '11, pages 825-835.
Quang Xuan Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. In Proceedings of EMNLP '11, pages 294-303.
David A. Ferrucci, Eric W. Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John M. Prager, Nico Schlaefer, and Christopher A. Welty. 2010. Building Watson: An overview of the DeepQA project. AI Magazine, 31(3):59-79.
Roxana Girju. 2003. Automatic detection of causal relations for question answering. In Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering, pages 76-83.
Chikara Hashimoto, Kentaro Torisawa, Stijn De Saeger, Jong-Hoon Oh, and Jun'ichi Kazama. 2012. Excitatory or inhibitory: A new semantic orientation extracts contradiction and causality from the web. In Proceedings of EMNLP-CoNLL '12.
Ryuichiro Higashinaka and Hideki Isozaki. 2008.
Corpus-based question answering for whyquestions. In Proceedings of IJCNLP ’08, pages 418–425. Takashi Inui and Manabu Okumura. 2005. Investigating the characteristics of causal relations in Japanese text. In In Annual Meeting of the Association for Computational Linguistics (ACL) Workshop on Frontiers in Corpus Annotations II: Pie in the Sky. Jun’ichi Kazama and Kentaro Torisawa. 2008. Inducing gazetteers for named entity recognition by large-scale clustering of dependency relations. In Proceedings of ACL-08: HLT, pages 407–415. Christopher S. G. Khoo, Syin Chan, and Yun Niu. 2000. Extracting causal knowledge from a medical database using graphical patterns. In Proceedings of ACL ’00, pages 336–343. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML ’01, pages 282–289. Alessandro Moschitti, Silvia Quarteroni, Roberto Basili, and Suresh Manandhar. 2007. Exploiting syntactic and shallow semantic kernels for question answer classification. In Proceedings of ACL ’07, pages 776–783. Masaki Murata, Sachiyo Tsukawaki, Toshiyuki Kanamaru, Qing Ma, and Hitoshi Isahara. 2007. A system for answering non-factoid Japanese questions by using passage retrieval weighted based on type of answer. In Proceedings of NTCIR-6. Srini Narayanan and Sanda Harabagiu. 2004. Question answering based on semantic structures. In Proceedings of COLING ’04, pages 693–701. Jong-Hoon Oh, Kentaro Torisawa, Chikara Hashimoto, Takuya Kawada, Stijn De Saeger, Jun’ichi Kazama, and Yiou Wang. 2012. Why question answering using sentiment analysis and word classes. In Proceedings of EMNLP-CoNLL ’12, pages 368–378. Kira Radinsky, Sagie Davidovich, and Shaul Markovitch. 2012. Learning causality for news events prediction. In Proceedings of WWW ’12, pages 909–918. Mehwish Riaz and Roxana Girju. 2010. Another look at causality: Discovering scenario-specific contingency relationships with no supervision. In ICSC ’10, pages 361–368. Hideki Shima, Hiroshi Kanayama, Cheng wei Lee, Chuan jie Lin, Teruko Mitamura, Yusuke Miyao, Shuming Shi, and Koichi Takeda. 2011. Overview of NTCIR-9 RITE: Recognizing Inference in TExt. In Proceedings of NTCIR-9. Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2011. Learning to rank answers to nonfactoid questions from web collections. Computational Linguistics, 37(2):351–383. 1742 Kentaro Torisawa. 2006. Acquiring inference rules with temporal constraints by using japanese coordinated sentences and noun-verb co-occurrences. In Proceedings of HLT-NAACL ’06, pages 57–64. Vladimir N. Vapnik. 1995. The nature of statistical learning theory. Springer-Verlag New York, Inc., New York, NY, USA. Istvan Varga, Motoki Sano, Kentaro Torisawa, Chikara Hashimoto, Kiyonori Ohtake, Takao Kawai, JongHoon Oh, and Stijn De Saeger. 2013. Aid is out there: Looking for help from tweets during a large scale disaster. In Proceedings of ACL ’13. Suzan Verberne, Lou Boves, Nelleke Oostdijk, and Peter-Arno Coppen. 2008. Using syntactic information for improving why-question answering. In Proceedings of COLING ’08, pages 953–960. Suzan Verberne, Lou Boves, and Wessel Kraaij. 2011. Bringing why-qa to web search. In Proceedings of ECIR ’11, pages 491–496. 1743
2013
170
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1744–1753, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Question Answering Using Enhanced Lexical Semantic Models Wen-tau Yih Ming-Wei Chang Christopher Meek Andrzej Pastusiak Microsoft Research Redmond, WA 98052, USA {scottyih,minchang,meek,andrzejp}@microsoft.com Abstract In this paper, we study the answer sentence selection problem for question answering. Unlike previous work, which primarily leverages syntactic analysis through dependency tree matching, we focus on improving the performance using models of lexical semantic resources. Experiments show that our systems can be consistently and significantly improved with rich lexical semantic information, regardless of the choice of learning algorithms. When evaluated on a benchmark dataset, the MAP and MRR scores are increased by 8 to 10 points, compared to one of our baseline systems using only surface-form matching. Moreover, our best system also outperforms pervious work that makes use of the dependency tree structure by a wide margin. 1 Introduction Open-domain question answering (QA), which fulfills a user’s information need by outputting direct answers to natural language queries, is a challenging but important problem (Etzioni, 2011). State-of-the-art QA systems often implement a complicated pipeline architecture, consisting of question analysis, document or passage retrieval, answer selection and verification (Ferrucci, 2012; Moldovan et al., 2003). In this paper, we focus on one of the key subtasks – answer sentence selection. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice. For instance, although both of the following sentences contain the answer “Jack Lemmon” to the question “Who won the best actor Oscar in 1973?” only the first sentence is correct. A1: Jack Lemmon won the Academy Award for Best Actor for Save the Tiger (1973). A2: Oscar winner Kevin Spacey said that Jack Lemmon is remembered as always making time for other people. One of the benefits of answer sentence selection is that the output can be provided directly to the user. Instead of outputting only the answer, returning the whole sentence often adds more value as the user can easily verify the correctness without reading a lengthy document. Answer sentence selection can be naturally reduced to a semantic text matching problem. Conceptually, we would like to measure how close the question and sentence can be matched semantically. Due to the variety of word choices and inherent ambiguities in natural languages, bag-ofwords approaches with simple surface-form word matching tend to produce brittle results with poor prediction accuracy (Bilotti et al., 2007). As a result, researchers put more emphasis on exploiting both the syntactic and semantic structure in questions/sentences. Representative examples include methods based on deeper semantic analysis (Shen and Lapata, 2007; Moldovan et al., 2007) and on tree edit-distance (Punyakanok et al., 2004; Heilman and Smith, 2010) and quasisynchronous grammar (Wang et al., 2007) that match the dependency parse trees of questions and sentences. However, such approaches often require more computational resources. In addition to applying a syntactic or semantic parser during run-time, finding the best matching between structured representations of sentences is not trivial. 
For example, the computational complexity of tree matching is O(V 2L4), where V is the number of nodes and L is the maximum depth (Tai, 1979). Instead of focusing on the high-level semantic representation, we turn our attention in this work to improving the shallow semantic compo1744 nent, lexical semantics. We formulate answer selection as a semantic matching problem with a latent word-alignment structure as in (Chang et al., 2010) and conduct a series of experimental studies on leveraging recently proposed lexical semantic models. Our main contributions in this work are two key findings. First, by incorporating the abundant information from a variety of lexical semantic models, the answer selection system can be enhanced substantially, regardless of the choice of learning algorithms and settings. Compared to the previous work, our latent alignment model improves the result on a benchmark dataset by a wide margin – the mean average precision (MAP) and mean reciprocal rank (MRR) scores are increased by 25.6% and 18.8%, respectively. Second, while the latent alignment model performs better than unstructured models, the difference diminishes after adding the enhanced lexical semantics information. This may suggest that compared to introducing complex structured constraints, incorporating shallow semantic information is both more effective and computationally inexpensive in improving the performance, at least for the specific word alignment model tested in this work. The rest of the paper is structured as follows. We first survey the related work in Sec. 2. Sec. 3 defines the problem of answer sentence selection, along with the high-level description of our solution. The enhanced lexical semantic models and the learning frameworks we explore are presented in Sec. 4 and Sec. 5, respectively. Our experimental results on a benchmark QA dataset is shown in Sec. 6. Finally, Sec. 7 concludes the paper. 2 Related Work While the task of question answering has a long history dated back to the dawn of artificial intelligence, early systems like STUDENT (Winograd, 1977) and LUNAR (Woods, 1973) are typically designed to demonstrate natural language understanding for a small and specific domain. The Text REtrieval Conference (TREC) Question Answering Track was arguably the first largescale evaluation of open-domain question answering (Voorhees and Tice, 2000). The task is designed in an information retrieval oriented setting. Given a factoid question along with a collection of documents, a system is required to return the exact answer, along with the document that supports the answer. In contrast, the Jeopardy! TV quiz show provides another open-domain question answering setting, in which IBM’s Watson system famously beat the two highest ranked players (Ferrucci, 2012). Questions in this game are presented in a statement form and the system needs to identify the true question and to give the exact answer. A short sentence or paragraph to justify the answer is not required in either TREC-QA or Jeopardy! As any QA system can virtually be decomposed into two major high-level components, retrieval and selection (Echihabi and Marcu, 2003), the answer selection problem is clearly critical. Limiting the scope of an answer to a sentence is first highlighted by Wang et al. (2007), who argued that it was more informative to present the whole sentence instead of a short answer to users. Observing the limitations of the bag-of-words models, Wang et al. 
(2007) proposed a syntaxdriven approach, where each pair of question and sentence are matched by their dependency trees. The mapping is learned by a generative probabilistic model based on a Quasi-synchronous Grammar formulation (Smith and Eisner, 2006). This approach was later improved by Wang and Manning (2010) with a tree-edit CRF model that learns the latent alignment structure. In contrast, general tree matching methods based on tree-edit distance have been first proposed by Punyakanok et al. (2004) for a similar answer selection task. Heilman and Smith (2010) proposed a discriminative approach that first computes a tree kernel function between the dependency trees of the question and candidate sentence, and then learns a classifier based on the tree-edit features extracted. Although lexical semantic information derived from WordNet has been used in some of these approaches, the research has mainly focused on modeling the mapping between the syntactic structures of questions and sentences, produced from syntactic analysis. The potential improvement from enhanced lexical semantic models seems to have been deliberately overlooked.1 3 Problem Definition We consider the answer selection problem in a supervised learning setting. For a set of questions {q1, · · · , qm}, each question qi is associated with a list of labeled candidate answer sentences 1For example, Heilman and Smith (2010) emphasized that “The tree edit model, which does not use lexical semantics knowledge, produced the best result reported to date.” 1745 What is the fastest car in the world? The Jaguar XJ220 is the dearest, fastest and most sought after car on the planet. Figure 1: An example pair of question and answer sentence, adapted from (Harabagiu and Moldovan, 2001). Words connected by solid lines are clear synonyms or hyponym/hypernym; words with weaker semantic association are linked by dashed lines. {(yi1, si1), (yi1, si2), · · · , (yin, sin)}, where yij = 1 indicates that sentence sij is a correct answer to question qi, and 0 otherwise. Using this labeled data, our goal is to learn a probabilistic classifier to predict the label of a new, unseen pair of question and sentence. Fundamentally, what the classifier predicts is whether the sentence “matches” the question semantically. In other words, does s have the answer that satisfies the semantic constraints provided in the question? Without representing the question and sentence in logic or syntactic trees, we take a word-alignment view for solving this problem. We assume that there is an underlying structure h that describes how q and s can be associated through the relations of the words in them. Figure 1 illustrates this setting using a revised example from (Harabagiu and Moldovan, 2001). In this figure, words connected by solid lines are clear synonyms or hyponym/hypernym; words connected by dashed lines indicate that they are weakly related. With this alignment structure, features like the degree of mapping or whether all the content words in the question can be mapped to some words in the sentence can be extracted and help improve the classifier. Notice that the structure representation in terms of word-alignment is fairly general. For instance, if we assume a naive complete bipartite matching, then effectively it reduces to the simple bag-of-words model. Typically, the “ideal” alignment structure is not available in the data, and previous work exploited mostly syntactic analysis (e.g., dependency trees) to reveal the latent mapping structure. 
In this work, we focus our study on leveraging the lowlevel semantic cues from recently proposed lexical semantic models. As will be shown in our experiments, such information not only improves a latent structure learning method, but also makes a simple bipartite matching approach extremely strong.2 4 Lexical Semantic Models In this section, we introduce the lexical semantic models we adopt for solving the semantic matching problem in answer selection. To go beyond the simple, limited surface-form matching, we aim to pair words that are semantically related, specifically measured by models of word relations including synonymy/antonymy, hypernymy/hyponymy (the Is-A relation) and general semantic word similarity. 4.1 Synonymy and Antonymy Among all the word relations, synonymy is perhaps the most basic one and needs to be handled reliably. Although sets of synonyms can be easily found in thesauri or WordNet synsets, such resources typically cover only strict synonyms. When comparing two words, it is more useful to estimate the degree of synonymy as well. For instance, ship and boat are not strict synonyms because a ship is usually viewed as a large boat. Knowing that two words are somewhat synonymous could be valuable in determining whether they should be mapped. In order to estimate the degree of synonymy, we leverage a recently proposed polarity-inducing latent semantic analysis (PILSA) model (Yih et al., 2012). Given a thesaurus, the model first constructs a signed d-by-n co-occurrence matrix W, where d is the number of word groups and n is the size of the vocabulary. Each row consists of a 2Proposed by an anonymous reviewer, one justification of this word-alignment approach, where syntactic analysis plays a less important role, is that there are often few sensible combinations of words. For instance, knowing only the set of words {”car”, ”fastest”, ”world”}, one may still guess correctly the question “What is the fastest car in the world?” 1746 group of synonyms and antonyms of a particular sense and each column represents a unique word. Values of the elements in each row vector are the TFIDF values of the corresponding words in this group. The notion of polarity is then induced by making the values of words in the antonym groups negative, and the matrix is generalized by a lowrank approximation derived by singular-value decomposition (SVD) in the end. This design has an intriguing property – if the cosine score of two column vectors are positive, then the two corresponding words tend to be synonymous; if it’s negative, then the two words are antonymous. The degree is measured by the absolute value. Following the setting described in (Yih et al., 2012), we construct a PILSA model based on the Encarta thesaurus and enhance it with a discriminative projection matrix training method. The estimated degrees of both synonymy and antonymy are used our experiments.3 4.2 Hypernymy and Hyponymy The Class-Inclusion or Is-A relation is commonly observed between words in questions and answer sentences. For example, to correctly answer the question “What color is Saturn?”, it is crucial that the selected sentence mentions a specific kind of color, as in “Saturn is a giant gas planet with brown and beige clouds.” Another example is “Who wrote Moonlight Sonata?”, where compose in “Ludwig van Beethoven composed the Moonlight Sonata in 1801.” is one kind of write. Traditionally, WordNet taxonomy is the linguistic resource for identifying hypernyms and hyponyms, applied broadly to many NLP problems. 
However, WordNet has a number of well-known limitations including its rather limited or skewed concept distribution and the lack of the coverage of the Is-A relation (Song et al., 2011). For instance, when a word refers to a named entity, the particular sense and meaning is often not encoded. As a result, relations such as “Apple” is-a “company” and “Jaguar” is-a “car” cannot be found in WordNet. Similar to the case in synonymy, the Is-A relation defined in WordNet does not provide a native, real-valued degree of the relation, which can only be roughly approximated using the number of links on the taxonomy path connecting two 3Mapping two antonyms may be desired if one of them is in the scope of negation (Morante and Blanco, 2012; Blanco and Moldovan, 2011). However, we do not attempt to resolve the negation scope in this work. concepts (Resnik, 1995). In order to remedy these issues, we augment WordNet with the Is-A relations found in Probase (Wu et al., 2012). Probase is a knowledge base that establishes connections between 2.7 million concepts, discovered automatically by applying Hearst patterns (Hearst, 1992) to 1.68 billion Web pages. Its abundant concept coverage distinguishes it from other knowledge bases, such as Freebase (Bollacker et al., 2008) and WikiTaxonomy (Ponzetto and Strube, 2007). Based on the frequency of term co-occurrences, each Is-A relation from Probase is associated with a probability value, indicating the degree of the relation. We verified the quality of Probase Is-A relations using a recently proposed SemEval task of relational similarity (Jurgens et al., 2012) in a companion paper (Zhila et al., 2013), where a subset of the data is to measure the degree of two words having a class-inclusion relation. Probase’s prediction correlates well with the human annotations and achieves a high Spearman’s rank correlation coefficient score, ρ = 0.619. In comparison, the previous best system (Rink and Harabagiu, 2012) in the task only reaches ρ = 0.233. These appealing qualities make Probase a robust lexical semantic model for hypernymy/hyponymy. 4.3 Semantic Word Similarity The third lexical semantic model we introduce targets a general notion of word similarity. Unlike synonymy and hyponymy, word similarity is only loosely defined when two words can be associated by some implicit relation.4 The general word similarity model can be viewed as a “back-off” solution when the exact lexical relation (e.g., partwhole and attribute) is not available or cannot be accurately detected. Among various word similarity models (Agirre et al., 2009; Reisinger and Mooney, 2010; Gabrilovich and Markovitch, 2007; Radinsky et al., 2011), the vector space models (VSMs) based on the idea of distributional similarity (Turney and Pantel, 2010) are often used as the core component. Inspired by (Yih and Qazvinian, 2012), which argues the importance of incorporating heterogeneous vector space models for measuring word similarity, we leverage three different VSMs in this work: Wiki term-vectors, recurrent neural 4Instead of making the distinction, word similarity here refers to the larger set of relations commonly covered by word relatedness (Budanitsky and Hirst, 2006). 1747 network language model (RNNLM) and a concept vector space model learned from click-through data. Semantic word similarity is estimated using the cosine score of the corresponding word vectors in these VSMs. 
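As a minimal sketch of this step, the function below computes one cosine score per vector space model for a word pair; the toy vectors are assumptions, and how the individual scores are combined downstream is left to the learner described in Section 5.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def word_similarities(w1, w2, models):
    """models: dict name -> {word: vector}. Returns one cosine score per VSM."""
    scores = {}
    for name, vectors in models.items():
        if w1 in vectors and w2 in vectors:
            scores[name] = cosine(vectors[w1], vectors[w2])
    return scores

if __name__ == "__main__":
    toy_models = {
        "wiki_context": {"ship": [0.9, 0.1, 0.3], "boat": [0.8, 0.2, 0.25]},
        "rnnlm":        {"ship": [0.5, -0.2],     "boat": [0.45, -0.1]},
    }
    print(word_similarities("ship", "boat", toy_models))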
Contextual term-vectors created using the Wikipedia corpus have shown to perform well on measuring word similarity (Reisinger and Mooney, 2010). Following the setting suggested by Yih and Qazvinian (2012), we create termvectors representing about 1 million words by aggregating terms within a window of [−10, 10] of each occurrence of the target word. The vectors are further refined by applying the same vocabulary and feature pruning techniques. A recurrent neural network language model (Mikolov et al., 2010) aims to estimate the probability of observing a word given its preceding context. However, one by-product of this model is the word embedding learned in its hidden-layer, which can be viewed as capturing the word meaning in some latent, conceptual space. As a result, vectors of related words tend to be close to each other. For this word similarity model, we take a 640-dimensional version of RNNLM vectors, which is trained using the Broadcast News corpus of 320M words.5 The final word relatedness model is a projection model learned from the click-through data of a commercial search engine (Gao et al., 2011). Unlike the previous two models, which are created or trained using a text corpus, the input for this model is pairs of aggregated queries and titles of pages users click. This parallel data is used to train a projection matrix for creating the mapping between words in queries and documents based on user feedback, using a Siamese neural network (Yih et al., 2011). Each row vector of this matrix is the dense vector representation of the corresponding word in the vocabulary. Perhaps due to its unique information source, we found this particular word embedding seems to complement the other two VSMs and tends to improve the word similarity measure in general. 5 Learning QA Matching Models In this section, we investigate the effectiveness of various learning models for matching questions and sentences, including the bag-of-words setting 5http://www.fit.vutbr.cz/˜imikolov/ rnnlm/ and the framework of learning latent structures. 5.1 Bag-of-Words Model The bag-of-words model treats each question and sentence as an unstructured bag of words. When comparing a question with a sentence, the model first matches each word in the question to each word in the sentence. It then aggregates features extracted from each of these word pairs to represent the whole question/sentence pair. A binary classifier can be trained easily using any machine learning algorithm in this standard supervised learning setting. Formally, let x = (q, s) be a pair of question q and sentence s. Let Vq = {wq1, wq2, · · · , wqm} and Vs = {ws1, ws2, · · · , wsn} be the sets of words in q and s, respectively. Given a word pair (wq, ws), where wq ∈Vq and ws ∈Vs, feature functions φ1, · · · , φd map it to a d-dimensional real-valued feature vector. We consider two aggregate functions for defining the feature vectors of the whole question/answer pair: average and max. Φavgj(q, s) = 1 mn X wq∈Vq ws∈Vs φj(wq, ws) (1) Φmaxj(q, s) = max wq∈Vq ws∈Vs φj(wq, ws) (2) Together, each question/sentence pair is represented by a 2d-dimensional feature vector. We tested two learning algorithms in this setting: logistic regression and boosted decision trees (Friedman, 2001). The former is the loglinear model widely used in the NLP community and the latter is a robust non-linear learning algorithm that has shown great empirical performance. 
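Equations (1) and (2) amount to mean- and max-pooling per-pair features over all question-word/sentence-word pairs. The sketch below implements exactly that with two toy feature functions, an exact-match indicator and a made-up similarity lookup standing in for the Section 4 models; the resulting 2d-dimensional vector is what would be passed to a classifier such as logistic regression or boosted decision trees.

```python
import numpy as np

def pair_features(wq, ws, sim_lookup):
    """Toy per-word-pair feature functions phi_1..phi_d (d = 2 here):
    an exact-match indicator and a generic similarity score. The real
    system uses the lexical semantic models of Section 4 instead of a
    hand-filled lookup table."""
    return np.array([
        1.0 if wq == ws else 0.0,
        sim_lookup.get((wq, ws), sim_lookup.get((ws, wq), 0.0)),
    ])

def aggregate(question, sentence, sim_lookup):
    """Eq. (1) and (2): average and max of each feature over all
    question-word / sentence-word pairs, concatenated into a 2d vector."""
    pairs = np.array([pair_features(wq, ws, sim_lookup)
                      for wq in question for ws in sentence])
    return np.concatenate([pairs.mean(axis=0), pairs.max(axis=0)])

sim = {("color", "beige"): 0.7, ("color", "brown"): 0.8}   # invented scores
q = ["what", "color", "saturn"]
s = ["saturn", "giant", "gas", "planet", "brown", "beige", "clouds"]
print(aggregate(q, s, sim))   # [Phi_avg_1, Phi_avg_2, Phi_max_1, Phi_max_2]
```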
The bag-of-words model does not require an additional inference stage as in structured learning, which may be computationally expensive. Nevertheless, its lack of structure information could limit the expressiveness of the model and make it difficult to capture more sophisticated semantics in the sentences. To address this concern, we investigate models of learning latent structures next. 5.2 Learning Latent Structures One obvious issue of the bag-of-words model is that words in the unrelated part of the sentence may still be paired with words in the question, which introduces noise to the final feature vector. 1748 This is observed in many question/sentence pairs, such as the one below. Q: Which was the first movie that James Dean was in? A: James Dean, who began as an actor on TV dramas, didn’t make his screen debut until 1951’s “Fixed Bayonet.” While this sentence correctly answers the question, the fact that James Dean began as a TV actor is unrelated to the question. As a result, an “ideal” word alignment structure should not link words in this clause to those in the question. In order to leverage the latent structured information, we adapt a recently proposed framework of learning constrained latent representations (LCLR) (Chang et al., 2010). LCLR can be viewed as a variant of Latent-SVM (Felzenszwalb et al., 2009) with different learning formulations and a general inference framework. The idea of LCLR is to replace the decision function of a standard linear model θT φ(x) with arg max h θT φ(x, h), (3) where θ represents the weight vector and h represents the latent variables. In this answer selection task, x = (q, s) represents a pair of question q and candidate sentence s. As described in Sec. 3, h refers to the latent alignment between q and s. The intuition behinds Eq. (3) is: candidate sentence s correctly answers question q if and only if the decision can be supported by the best alignment h. The objective function of LCLR is defined as: minθ 1 2||θ||2 + C X i ξ2 i s.t. ξi ≥1 −yi max h θT φ(x, h) Note that the alignment is latent, so LCLR uses the binary labels in the training data as feedback to find the alignment for each example. The computational difficulty of the inference problem (Eq. (3)) largely depends on the constraints we enforce in the alignment. Complicated constraints may result in a difficult inference problem, which can be solved by integer linear programming (Roth and Yih, 2007). In this work, we considered several sets of constraints for the alignment task, including a two-layer phrase/word alignment structure, but found that they generally performed the same. Therefore, we chose the many-to-one alignment6, where inference can be solved exactly using a simple greedy algorithm. 6 Experiments We present our experimental results in this section by first introducing the data and evaluation metrics, followed by the results of existing systems and some baseline methods. We then show the positive impact of adding information of word relations from various lexical semantics models, with some discussion on the limitation of the word-matching approach. 6.1 Data & Evaluation Metrics The answer selection dataset we used was originally created by Wang et al. (2007) based on the QA track of past Text REtrieval Conferences (TREC-QA). Questions in this dataset are short factoid questions, such as “What is Crips’ gang color?” In average, each question is associated with approximately 33 answer candidate sentences. 
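Under the many-to-one constraint of Section 5.2, and assuming (as is usual) that φ(x, h) sums the per-pair features over the links in h, the maximization in Eq. (3) decomposes over question words, so each question word can independently pick its highest-scoring sentence word. The sketch below illustrates that inference step; the weight vector is fixed by hand only for illustration, whereas LCLR learns it from the binary question/sentence labels.

```python
import numpy as np

def pair_features(wq, ws, sim_lookup):
    """Same toy per-pair features as before: exact match + similarity."""
    return np.array([
        1.0 if wq == ws else 0.0,
        sim_lookup.get((wq, ws), sim_lookup.get((ws, wq), 0.0)),
    ])

def best_alignment_score(question, sentence, theta, sim_lookup):
    """max_h theta . phi(x, h) under the many-to-one constraint: every
    question word links to exactly one sentence word, so the maximization
    decomposes and each question word greedily picks its best partner."""
    alignment, total = {}, 0.0
    for wq in question:
        scores = [(float(theta @ pair_features(wq, ws, sim_lookup)), ws)
                  for ws in sentence]
        best_score, best_ws = max(scores)
        alignment[wq] = best_ws
        total += best_score
    return total, alignment

theta = np.array([2.0, 1.0])   # toy weights, not learned parameters
sim = {("color", "beige"): 0.7, ("color", "brown"): 0.8}
q = ["color", "saturn"]
s = ["saturn", "is", "a", "giant", "gas", "planet", "with", "brown", "clouds"]
score, h = best_alignment_score(q, s, theta, sim)
print(score, h)   # the final decision would threshold this score
```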
A pair of question and sentence is judged positive if the sentence contains the exact answer key and can provide sufficient context as supporting evidence. The training set of the data contains manually labeled 5,919 question/sentence pairs from TREC 8-12. The development and testing sets are both from TREC 13, which contain 1,374 and 1,866 pairs, respectively. The task is treated as a sentence ranking problem for each question and thus evaluated in Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR), using the official TREC evaluation program. Following (Wang et al., 2007), candidate sentences with more than 40 words are removed from evaluation, as well as questions with only positive or negative candidate sentences. 6.2 Baseline Methods Several systems have been proposed and tested using this dataset. Wang et al. (2007) presented a generative probabilistic model based on a Quasi-synchronous Grammar formulation and was later improved by Wang and Manning (2010) with a tree-edit CRF model that learns the latent alignment structure. In contrast, Heilman and 6Each word in the question needs to be linked to a word in the sentence. Each word in the sentence can be linked to zero or multiple words in the question. 1749 System MAP MRR Wang et al. (2007) 0.6029 0.6852 Heilman and Smith (2010) 0.6091 0.6917 Wang and Manning (2010) 0.5951 0.6951 Table 1: Test set results of existing methods, taken from Table 3 of (Wang and Manning, 2010). Dev Test Baseline MAP MRR MAP MRR Random 0.5243 0.5816 0.4708 0.5286 Word Cnt 0.6516 0.7216 0.6263 0.6822 Wgt Word Cnt 0.7112 0.7880 0.6531 0.7071 Table 2: Results of three baseline methods. Smith (2010) proposed a discriminative approach that first computes a tree kernel function between the dependency trees of the question and candidate sentence, and then learns a classifier based on the tree-edit features extracted. Table 1 summarizes their results on the test set. All these systems incorporated lexical semantics features derived from WordNet and named entity features. In order to further estimate the difficulty of this task and dataset, we tested three simple baselines. The first is random scoring, which simply assigns a random score to each candidate sentence. The second one, word count, is to count how many words in the question that also occur in the answer sentence, after removing stopwords7, and lowering the case. Finally, the last baseline method, weighted word count, is basically the same as identical word matching, but the count is re-weighted using the IDF value of the question word. This is similar to the BM25 ranking function (Robertson et al., 1995). The results of these three methods are shown in Table 1. Somewhat surprisingly, we find that word count is fairly strong and performs comparably to previous systems.8 In addition, weighting the question words with their IDF values further improves the results. 6.3 Incorporating Rich Lexical Semantics We test the effectiveness of adding rich lexical semantics information by creating examples of different feature sets. As described in Sec. 5, 7We used a list of 101 stopwords, including articles, pronouns and punctuation. 8The finding has been confirmed by the lead author of (Wang et al., 2007). all the features are based on the properties of the pair of a word from the question and a word from the candidate sentence. Stopwords are first removed from both questions and sentences and all words are lower-cased. 
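The two evaluation metrics and the strongest baseline above can be written down in a few lines. The sketch below re-implements MAP, MRR, and the IDF-weighted word count score for illustration only; the reported numbers come from the official TREC evaluation program, and the IDF values used here are invented.

```python
def average_precision(labels):
    """labels: 0/1 relevance of candidate sentences in ranked order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def reciprocal_rank(labels):
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def evaluate(ranked_lists):
    """MAP and MRR over a list of per-question ranked label lists."""
    ap = [average_precision(l) for l in ranked_lists]
    rr = [reciprocal_rank(l) for l in ranked_lists]
    return sum(ap) / len(ap), sum(rr) / len(rr)

def weighted_word_count(question, sentence, idf):
    """'Weighted word count' baseline: IDF-weighted count of question words
    that also appear in the sentence (stopword removal and lower-casing are
    assumed to have been done already)."""
    return sum(idf.get(w, 0.0) for w in set(question) if w in set(sentence))

idf = {"color": 2.3, "saturn": 5.1, "crips": 6.0}   # invented IDF values
print(weighted_word_count(["what", "color", "saturn"],
                          ["saturn", "has", "brown", "clouds"], idf))
print(evaluate([[0, 1, 0, 1], [1, 0, 0]]))   # (MAP, MRR) for two questions
```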
Features used in the experiments can be categorized into six types: identical word matching (I), lemma matching (L), WordNet (WN), enhanced Lexical Semantics (LS), Named Entity matching (NE) and Answer type checking (Ans). Inspired by the weighted word count baseline, all features except (Ans) are weighted by the IDF value of the question word. In other words, the IDF values help decide the importance of word pairs to the model. Staring from the our baseline model, weighted word count, the identical word matching (I) feature checks whether the pair of words are the same. Instead of checking the surface form of the word, lemma matching (L) verifies whether the two words have the same lemma form. Arguably the most common source of word relations, WordNet (WN) provides the primitive features of whether two words could belong to the same synset in WordNet, could be antonyms and whether one is a hypernym of the other. Alternatively, the enhanced lexical semantics (LS) features apply the models described in Sec. 4 to the word pair and use their estimated degree of synonymy, antonymy, hyponymy and semantic relatedness as features. Named entity matching (NE) checks whether two words are individually part of some named entities with the same type. Finally, when the question word is the WH-word, we check if the paired word belongs to some phrase that has the correct answer type using simple rules, such as “Who should link to a word that is part of a named entity of type Person.” We created examples in each round of experiments by augmenting these features in the same order, and observed how adding different information helped improve the model performance. Three models are included in our study. For the unstructured, bag-of-words setting, we tested logistic regression (LR) and boosted decision trees (BDT). As mentioned in Sec. 5, the features for the whole question/sentence pair are the average and max of features of all the word pairs. For the structured-output setting, we used the framework of learning constrained latent representation (LCLR) and required that each question word needed to be mapped to a word in the sentence. 1750 LR BDT LCLR Feature set MAP MRR MAP MRR MAP MRR 1: I 0.6531 0.7071 0.6323 0.6898 0.6629 0.7279 2: I+L 0.6744 0.7223 0.6496 0.6923 0.6815 0.7270 3: I+L+WN 0.7039 0.7705 0.6798 0.7450 0.7316 0.7921 4: I+L+WN+LS 0.7339 0.8107 0.7523 0.8455 0.7626 0.8231 5: All 0.7374 0.8171 0.7495 0.8450 0.7648 0.8255 Table 3: Test results of various models and feature groups. Logistic regression (LR) and boosted decision trees (BDT) are the two unstructured models. LCLR is the algorithm for learning latent structures. Feature groups are identical word matching (I), lemma matching (L), WordNet (WN) and enhanced Lexical Semantics (LS). All includes these four plus Named Entity matching (NE) and Answer type checking (Ans). Hyper-parameters are selected using the ones that achieve the best MAP score on the development set. Results of these models and feature sets are presented in Table 3. We make two observations from the results. First, while incorporating more information of the word pairs in general helps, it is clear that mapping words beyond surface-form matching with the help of WordNet (Line #3 vs. #2) is important. Moreover, when richer information from other lexical semantic models is available, the performance can be further improved (Line #4 vs. #3). 
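One possible realization of these per-pair feature groups is sketched below. The lemmatizer, the WordNet synonym lookup, and the Section 4 scores are replaced by toy dictionaries with invented values, but the sketch shows the IDF scaling by the question word that the experiments apply to every group except answer-type checking.

```python
def word_pair_features(wq, ws, idf, lemma, wn_syn, ls_scores):
    """A toy realization of (a subset of) the feature groups: identical word
    matching (I), lemma matching (L), a WordNet flag (WN), and graded
    enhanced lexical semantics scores (LS). Every feature is scaled by the
    IDF value of the question word wq."""
    w_idf = idf.get(wq, 0.0)
    pair = (wq, ws)
    feats = {
        "identical": 1.0 if wq == ws else 0.0,                                 # (I)
        "same_lemma": 1.0 if lemma.get(wq, wq) == lemma.get(ws, ws) else 0.0,  # (L)
        "wn_synonym": 1.0 if ws in wn_syn.get(wq, set()) else 0.0,             # (WN)
        "ls_synonymy": ls_scores.get(pair, {}).get("syn", 0.0),                # (LS)
        "ls_hyponymy": ls_scores.get(pair, {}).get("isa", 0.0),                # (LS)
        "ls_relatedness": ls_scores.get(pair, {}).get("rel", 0.0),             # (LS)
    }
    return {name: w_idf * value for name, value in feats.items()}

# Invented resources for the "wrote" / "composed" example from Section 4.2.
idf = {"wrote": 3.2}
lemma = {"wrote": "write", "composed": "compose"}
wn_syn = {"wrote": set()}
ls = {("wrote", "composed"): {"syn": 0.0, "isa": 0.55, "rel": 0.62}}
print(word_pair_features("wrote", "composed", idf, lemma, wn_syn, ls))
```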
Overall, by simply incorporating more information on word relations, we gain approximately 10 points in both MAP and MRR compared to surface-form matching (Line #4 vs. #2), consistently across all three models. However, adding more information like named entity matching and answer type verification does not seem to help much (Line #5 vs. #4). Second, while the structured-output model usually performs better than both unstructured models (LCLR vs. LR & BDT), the performance gain diminishes after more information of word pairs is available (e.g., Lines #4 and #5). 6.4 Limitation of Word Matching Models Although we have demonstrated the benefits of leveraging various lexical semantic models to help find the association between words, the problem of question answering is nevertheless far from solved using the word-based approach. Examining the output of the LCLR model with all features on the development set, we found that there were three main sources of errors, including uncovered or inaccurate entity relations, the lack of robust question analysis and the need of high-level semantic representation and inference. While the first two can be improved by, say, using a better named entity tagger, incorporating other knowledge bases and building a question classifier, how to solve the third problem is tricky. Below is an example: Q: In what film is Gordon Gekko the main character? A: He received a best actor Oscar in 1987 for his role as Gordon Gekko in “Wall Street”. This is a correct answer sentence because “winning a best actor Oscar” implies that the role Gordon Gekko is the main character. It is hard to believe that a pure word-matching model would be able to solve this type of “inferential question answering” problem. 7 Conclusions In this paper, we present an experimental study on solving the answer selection problem using enhanced lexical semantic models. Following the word-alignment paradigm, we find that the rich lexical semantic information improves the models consistently in the unstructured bag-of-words setting and also in the framework of learning latent structures. Another interesting finding we have is that while the latent structured model, LCLR, performs better than the other two unstructured models, the difference diminishes after more information, including the enhanced lexical semantic knowledge and answer type verification, has been incorporated. This may suggest that adding shallow semantic information is more effective than introducing complex structured constraints, at least for the specific word alignment model we experimented with in this work. 1751 In the future, we plan to explore several directions. First, although we focus on improving TREC-style open-domain question answering in this work, we would like to apply the proposed technology to other QA scenarios, such as community-based QA (CQA). For instance, the sentence matching technique can help map a given question to some questions in an existing CQA database (e.g., Yahoo! Answers). Moreover, the answer sentence selection scheme could also be useful in extracting the most related sentences from the answer text to form a summary answer. Second, because the task of answer sentence selection is very similar to paraphrase detection (Dolan et al., 2004) and recognizing textual entailment (Dagan et al., 2006), we would like to investigate whether systems for these tasks can be improved by incorporating enhanced lexical semantic knowledge as well. 
Finally, we would like to improve our system for the answer sentence selection task and for question answering in general. In addition to following the directions suggested by the error analysis presented in Sec. 6.4, we plan to use logic-like semantic representations of questions and sentences, and explore the role of lexical semantics for handling questions that require inference. Acknowledgments We are grateful to Mengqiu Wang for providing the dataset and helping clarify some issues in the experiments. We also thank Chris Burges and Hoifung Poon for valuable discussion and the anonymous reviewers for their useful comments. References E. Agirre, E. Alfonseca, K. Hall, J. Kravalova, M. Pas¸ca and A. Soroa. 2009. A study on similarity and relatedness using distributional and WordNetbased approaches. In Proceedings of NAACL, pages 19–27. M. Bilotti, P. Ogilvie, J. Callan, and E. Nyberg. 2007. Structured retrieval for question answering. In Proceedings of SIGIR, pages 351–358. E. Blanco and D. Moldovan. 2011. Semantic representation of negation using focus detection. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011). K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In ACM Conference on Management of Data (SIGMOD), pages 1247–1250. A. Budanitsky and G. Hirst. 2006. Evaluating WordNet-based measures of lexical semantic relatedness. Computational Linguistics, 32:13–47, March. M. Chang, D. Goldwasser, D. Roth, and V. Srikumar. 2010. Discriminative learning over constrained latent representations. In Proceedings of NAACL. I. Dagan, O. Glickman, and B. Magnini, editors. 2006. The PASCAL Recognising Textual Entailment Challenge, volume 3944. Springer-Verlag, Berlin. W. Dolan, C. Quirk, and C. Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of COLING. A. Echihabi and D. Marcu. 2003. A noisy-channel approach to question answering. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 16–23. Oren Etzioni. 2011. Search needs a shake-up. Nature, 476(7358):25–26. P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. 2009. Object detection with discriminatively trained part based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 99(1). D. Ferrucci. 2012. Introduction to “This is Watson”. IBM Journal of Research and Development, 56(3.4):1–1. J. Friedman. 2001. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189–1232. E. Gabrilovich and S. Markovitch. 2007. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. In AAAI Conference on Artificial Intelligence (AAAI). J. Gao, K. Toutanova, and W. Yih. 2011. Clickthrough-based latent semantic models for web search. In Proceedings of SIGIR, pages 675–684. S. Harabagiu and D. Moldovan. 2001. Open-domain textual question answering. Tutorial of NAACL2001. M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of COLING, pages 539–545. M. Heilman and N. Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1011–1019. 1752 D. Jurgens, S. Mohammad, P. 
Turney, and K. Holyoak. 2012. SemEval-2012 Task 2: Measuring degrees of relational similarity. In Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 356–364. T. Mikolov, M. Karafi´at, L. Burget, J. Cernock´y, and S. Khudanpur. 2010. Recurrent neural network based language model. In Annual Conference of the International Speech Communication Association (INTERSPEECH), pages 1045–1048. D. Moldovan, M. Pas¸ca, S. Harabagiu, and M. Surdeanu. 2003. Performance issues and error analysis in an open-domain question answering system. ACM Transactions on Information Systems (TOIS), 21(2):133–154. D. Moldovan, C. Clark, S. Harabagiu, and D. Hodges. 2007. COGEX: A semantically and contextually enriched logic prover for question answering. Journal of Applied Logic, 5(1):49–69. R. Morante and E. Blanco. 2012. *SEM 2012 shared task: Resolving the scope and focus of negation. In Proceedings of the First Joint Conference on Lexical and Computational Semantics, pages 265–274. S. Ponzetto and M. Strube. 2007. Deriving a large scale taxonomy from wikipedia. In AAAI Conference on Artificial Intelligence (AAAI). V. Punyakanok, D. Roth, and W. Yih. 2004. Mapping dependencies trees: An application to question answering. In International Symposium on Artificial Intelligence and Mathematics (AI & Math). K. Radinsky, E. Agichtein, E. Gabrilovich, and S. Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In WWW ’11, pages 337–346. J. Reisinger and R. Mooney. 2010. Multi-prototype vector-space models of word meaning. In Proceedings of NAACL. P. Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In International Joint Conference on Artificial Intelligence (IJCAI). B. Rink and S. Harabagiu. 2012. UTD: Determining relational similarity using lexical patterns. In Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 413–418. S. Robertson, S. Walker, S. Jones, M. HancockBeaulieu, and M. Gatford. 1995. Okapi at TREC-3. In Text REtrieval Conference (TREC), pages 109– 109. D. Roth and W. Yih. 2007. Global inference for entity and relation identification via a linear programming formulation. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press. D. Shen and M. Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of EMNLP-CoNLL, pages 12–21. D. Smith and J. Eisner. 2006. Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies. In Proceedings of the HLT-NAACL Workshop on Statistical Machine Translation, pages 23–30. Y. Song, H. Wang, Z. Wang, H. Li, and W. Chen. 2011. Short text conceptualization using a probabilistic knowledgebase. In International Joint Conference on Artificial Intelligence (IJCAI), pages 2330–2336. K. Tai. 1979. The tree-to-tree correction problem. J. ACM, 26(3):422–433, July. P. Turney and P. Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141– 188. E. Voorhees and D. Tice. 2000. Building a question answering test collection. In Proceedings of SIGIR, pages 200–207. M. Wang and C. Manning. 2010. Probabilistic treeedit models with structured latent variables for textual entailment and question answering. In Proceedings of COLING. M. Wang, N. Smith, and T. Mitamura. 2007. What is the Jeopardy model? A quasi-synchronous grammar for QA. In Proceedings of EMNLP-CoNLL. T. 
Winograd. 1977. Five lectures on artificial intelligence. In A. Zampolli, editor, Linguistic Structures Processing, pages 399–520. North Holland. W. Woods. 1973. Progress in natural language understanding: An application to lunar geology. In Proceedings of the National Computer Conference and Exposition (AFIPS), pages 441–450. W. Wu, H. Li, H. Wang, and K. Zhu. 2012. Probase: a probabilistic taxonomy for text understanding. In ACM Conference on Management of Data (SIGMOD), pages 481–492. W. Yih and V. Qazvinian. 2012. Measuring word relatedness using heterogeneous vector space models. In Proceedings of NAACL-HLT 2012, pages 616–620. W. Yih, K. Toutanova, J. Platt, and C. Meek. 2011. Learning discriminative projections for text similarity measures. In ACL Conference on Natural Language Learning (CoNLL), pages 247–256. W. Yih, G. Zweig, and J. Platt. 2012. Polarity inducing latent semantic analysis. In Proceedings of EMNLPCoNLL, pages 1212–1222. A. Zhila, W. Yih, C. Meek, G. Zweig, and T. Mikolov. 2013. Combining heterogeneous models for measuring relational similarity. In Proceedings of HLTNAACL. 1753
2013
171
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1754–1763, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Syntactic Patterns versus Word Alignment: Extracting Opinion Targets from Online Reviews Kang Liu, Liheng Xu and Jun Zhao National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences {kliu, lhxu, jzhao}@nlpr.ia.ac.cn Abstract Mining opinion targets is a fundamental and important task for opinion mining from online reviews. To this end, there are usually two kinds of methods: syntax based and alignment based methods. Syntax based methods usually exploited syntactic patterns to extract opinion targets, which were however prone to suffer from parsing errors when dealing with online informal texts. In contrast, alignment based methods used word alignment model to fulfill this task, which could avoid parsing errors without using parsing. However, there is no research focusing on which kind of method is more better when given a certain amount of reviews. To fill this gap, this paper empirically studies how the performance of these two kinds of methods vary when changing the size, domain and language of the corpus. We further combine syntactic patterns with alignment model by using a partially supervised framework and investigate whether this combination is useful or not. In our experiments, we verify that our combination is effective on the corpus with small and medium size. 1 Introduction With the rapid development of Web 2.0, huge amount of user reviews are springing up on the Web. Mining opinions from these reviews become more and more urgent since that customers expect to obtain fine-grained information of products and manufacturers need to obtain immediate feedbacks from customers. In opinion mining, extracting opinion targets is a basic subtask. It is to extract a list of the objects which users express their opinions on and can provide the prior information of targets for opinion mining. So this task has attracted many attentions. To extract opinion targets, pervious approaches usually relied on opinion words which are the words used to express the opinions (Hu and Liu, 2004a; Popescu and Etzioni, 2005; Liu et al., 2005; Wang and Wang, 2008; Qiu et al., 2011; Liu et al., 2012). Intuitively, opinion words often appear around and modify opinion targets, and there are opinion relations and associations between them. If we have known some words to be opinion words, the words which those opinion words modify will have high probability to be opinion targets. Therefore, identifying the aforementioned opinion relations between words is important for extracting opinion targets from reviews. To fulfill this aim, previous methods exploited the words co-occurrence information to indicate them (Hu and Liu, 2004a; Hu and Liu, 2004b). Obviously, these methods cannot obtain precise extraction because of the diverse expressions by reviewers, like long-span modified relations between words, etc. To handle this problem, several methods exploited syntactic information, where several heuristic patterns based on syntactic parsing were designed (Popescu and Etzioni, 2005; Qiu et al., 2009; Qiu et al., 2011). However, the sentences in online reviews usually have informal writing styles including grammar mistakes, typos, improper punctuation etc., which make parsing prone to generate mistakes. 
As a result, the syntax-based methods which heavily depended on the parsing performance would suffer from parsing errors (Zhang et al., 2010). To improve the extraction performance, we can only employ some exquisite highprecision patterns. But this strategy is likely to miss many opinion targets and has lower recall with the increase of corpus size. To resolve these problems, Liu et al. (2012) formulated identifying opinion relations between words as an monolingual alignment process. A word can find its corresponding modifiers by using a word alignment 1754 Figure 1: Mining Opinion Relations between Words using Partially Supervised Alignment Model model (WAM). Without using syntactic parsing, the noises from parsing errors can be effectively avoided. Nevertheless, we notice that the alignment model is a statistical model which needs sufficient data to estimate parameters. When the data is insufficient, it would suffer from data sparseness and may make the performance decline. Thus, from the above analysis, we can observe that the size of the corpus has impacts on these two kinds of methods, which arises some important questions: how can we make selection between syntax based methods and alignment based method for opinion target extraction when given a certain amount of reviews? And which kind of methods can obtain better extraction performance with the variation of the size of the dataset? Although (Liu et al., 2012) had proved the effectiveness of WAM, they mainly performed experiments on the dataset with medium size. We are still curious about that when the size of dataset is larger or smaller, can we obtain the same conclusion? To our best knowledge, these problems have not been studied before. Moreover, opinions may be expressed in different ways with the variation of the domain and language of the corpus. When the domain or language of the corpus is changed, what conclusions can we obtain? To answer these questions, in this paper, we adopt a unified framework to extract opinion targets from reviews, in the key component of which we vary the methods between syntactic patterns and alignment model. Then we run the whole framework on the corpus with different size (from #500 to #1, 000, 000), domain (three domains) and language (Chinese and English) to empirically assess the performance variations and discuss which method is more effective. Furthermore, this paper naturally addresses another question: is it useful for opinion targets extraction when we combine syntactic patterns and word alignment model into a unified model? To this end, we employ a partially supervised alignment model (PSWAM) like (Gao et al., 2010; Liu et al., 2013). Based on the exquisitely designed high-precision syntactic patterns, we can obtain some precisely modified relations between words in sentences, which provide a portion of links of the full alignments. Then, these partial alignment links can be regarded as the constrains for a standard unsupervised word alignment model. And each target candidate would find its modifier under the partial supervision. In this way, the errors generated in standard unsupervised WAM can be corrected. For example in Figure 1, “kindly” and “courteous” are incorrectly regarded as the modifiers for “foods” if the WAM is performed in an whole unsupervised framework. However, by using some high-precision syntactic patterns, we can assert “courteous” should be aligned to “services”, and “delicious” should be aligned to “foods”. 
Through combination under partial supervision, we can see “kindly” and “courteous” are correctly linked to “services”. Thus, it’s reasonable to expect to yield better performance than traditional methods. As mentioned in (Liu et al., 2013), using PSWAM can not only inherit the advantages of WAM: effectively avoiding noises from syntactic parsing errors when dealing with informal texts, but also can improve the mining performance by using partial supervision. However, is this kind of combination always useful for opinion target extraction? To access this problem, we also make comparison between PSWAM based method and the aforementioned methods in the same corpora with different size, language and domain. The experimental results show the combination by using PSWAM can be effective on dataset with small and medium size. 1755 2 Related Work Opinion target extraction isn’t a new task for opinion mining. There are much work focusing on this task, such as (Hu and Liu, 2004b; Ding et al., 2008; Li et al., 2010; Popescu and Etzioni, 2005; Wu et al., 2009). Totally, previous studies can be divided into two main categories: supervised and unsupervised methods. In supervised approaches, the opinion target extraction task was usually regarded as a sequence labeling problem (Jin and Huang, 2009; Li et al., 2010; Ma and Wan, 2010; Wu et al., 2009; Zhang et al., 2009). It’s not only to extract a lexicon or list of opinion targets, but also to find out each opinion target mentions in reviews. Thus, the contextual words are usually selected as the features to indicate opinion targets in sentences. And classical sequence labeling models are used to train the extractor, such as CRFs (Li et al., 2010), HMM (Jin and Huang, 2009) etc.. Jin et al. (2009) proposed a lexicalized HMM model to perform opinion mining. Both Li et al. (2010) and Ma et al. (2010) used CRFs model to extract opinion targets in reviews. Specially, Li et al. proposed a Skip-Tree CRF model for opinion target extraction, which exploited three structures including linear-chain structure, syntactic structure, and conjunction structure. However, the main limitation of these supervised methods is the need of labeled training data. If the labeled training data is insufficient, the trained model would have unsatisfied extraction performance. Labeling sufficient training data is time and labor consuming. And for different domains, we need label data independently, which is obviously impracticable. Thus, many researches focused on unsupervised methods, which are mainly to extract a list of opinion targets from reviews. Similar to ours, most approaches regarded opinion words as the indicator for opinion targets. (Hu and Liu, 2004a) regarded the nearest adjective to an noun/noun phrase as its modifier. Then it exploited an association rule mining algorithm to mine the associations between them. Finally, the frequent explicit product features can be extracted in a bootstrapping process by further combining item’s frequency in dataset. Only using nearest neighbor rule to mine the modifier for each candidate cannot obtain precise results. Thus, (Popescu and Etzioni, 2005) used syntax information to extract opinion targets, which designed some syntactic patterns to capture the modified relations between words. The experimental results showed that their method had better performance than (Hu and Liu, 2004a). 
Moreover, (Qiu et al., 2011) proposed a Double Propagation method to expand sentiment words and opinion targets iteratively, where they also exploited syntactic relations between words. Specially, (Qiu et al., 2011) didn’t only design syntactic patterns for capturing modified relations, but also designed patterns for capturing relations among opinion targets and relations among opinion words. However, the main limitation of Qiu’s method is that the patterns based on dependency parsing tree may miss many targets for the large corpora. Therefore, Zhang et al. (2010) extended Qiu’s method. Besides the patterns used in Qiu’s method, they adopted some other special designed patterns to increase recall. In addition they used the HITS (Kleinberg, 1999) algorithm to compute opinion target confidences to improve the precision. (Liu et al., 2012) formulated identifying opinion relations between words as an alignment process. They used a completely unsupervised WAM to capture opinion relations in sentences. Then the opinion targets were extracted in a standard random walk framework where two factors were considered: opinion relevance and target importance. Their experimental results have shown that WAM was more effective than traditional syntax-based methods for this task. (Liu et al., 2013) extend Liu’s method, which is similar to our method and also used a partially supervised alignment model to extract opinion targets from reviews. We notice these two methods ((Liu et al., 2012) and (Liu et al., 2013)) only performed experiments on the corpora with a medium size. Although both of them proved that WAM model is better than the methods based on syntactic patterns, they didn’t discuss the performance variation when dealing with the corpora with different sizes, especially when the size of the corpus is less than 1,000 and more than 10,000. Based on their conclusions, we still don’t know which kind of methods should be selected for opinion target extraction when given a certain amount of reviews. 3 Opinion Target Extraction Methodology To extract opinion targets from reviews, we adopt the framework proposed by (Liu et al., 2012), which is a graph-based extraction framework and 1756 has two main components as follows. 1) The first component is to capture opinion relations in sentences and estimate associations between opinion target candidates and potential opinion words. In this paper, we assume opinion targets to be nouns or noun phrases, and opinion words may be adjectives or verbs, which are usually adopted by (Hu and Liu, 2004a; Qiu et al., 2011; Wang and Wang, 2008; Liu et al., 2012). And a potential opinion relation is comprised of an opinion target candidate and its corresponding modified word. 2) The second component is to estimate the confidence of each candidate. The candidates with higher confidence scores than a threshold will be extracted as opinion targets. In this procedure, we formulate the associations between opinion target candidates and potential opinion words in a bipartite graph. A random walk based algorithm is employed on this graph to estimate the confidence of each target candidate. In this paper, we fix the method in the second component and vary the algorithms in the first component. In the first component, we respectively use syntactic patterns and unsupervised word alignment model (WAM) to capture opinion relations. In addition, we employ a partially supervised word alignment model (PSWAM) to incorporate syntactic information into WAM. 
In experiments, we run the whole framework on the different corpora to discuss which method is more effective. In the following subsections, we will present them in detail. 3.1 The First Component: Capturing Opinion Relations and Estimating Associations between Words 3.1.1 Syntactic Patterns To capture opinion relations in sentences by using syntactic patterns, we employ the manual designed syntactic patterns proposed by (Qiu et al., 2011). Similar to Qiu, only the syntactic patterns based on the direct dependency are employed to guarantee the extraction qualities. The direct dependency has two types. The first type indicates that one word depends on the other word without any additional words in their dependency path. The second type denotes that two words both depend on a third word directly. Specifically, we employ Minipar1 to parse sentences. To further make syn1http://webdocs.cs.ualberta.ca/lindek/minipar.htm tactic patterns precisely, we only use a few dependency relation labels outputted by Minipar, such as mod, pnmod, subj, desc etc. To make a clear explanation, we give out some syntactic pattern examples in Table 1. In these patterns, OC is a potential opinion word which is an adjective or a verb. TC is an opinion target candidate which is a noun or noun phrase. The item on the arrows means the dependency relation type. The item in parenthesis denotes the part-of-speech of the other word. In these examples, the first three patterns are based on the first direct dependency type and the last two patterns are based on the second direct dependency type. Pattern#1: <OC> mod −−−→<TC> Example: This phone has an amazing design Pattern#2: <TC> obj −−→<OC> Example: I like this phone very much Pattern#3: <OC> pnmod −−−−→<TC> Example: the buttons easier to use Pattern#4: <OC> mod −−−→(NN) subj ←−−−<TC> Example: IPhone is a revolutionary smart phone Pattern#5: <OC> pred −−−→(VBE) subj ←−−−<TC> Example: The quality of LCD is good Table 1: Some Examples of Used Syntactic Patterns 3.1.2 Unsupervised Word Alignment Model In this subsection, we present our method for capturing opinion relations using unsupervised word alignment model. Similar to (Liu et al., 2012), every sentence in reviews is replicated to generate a parallel sentence pair, and the word alignment algorithm is applied to the monolingual scenario to align a noun/noun phase with its modifiers. We select IBM-3 model (Brown et al., 1993) as the alignment model. Formally, given a sentence S = {w1, w2, ..., wn}, we have Pibm3(A|S) ∝ N Y i=1 n(φi|wi) N Y j=1 t(wj|waj)d(j|aj, N) (1) where t(wj|waj) models the co-occurrence information of two words in dataset. d(j|aj, n) models word position information, which describes the probability of a word in position aj aligned with a word in position j. And n(φi|wi) describes the ability of a word for modifying (being modified by) several words. φi denotes the number of words 1757 that are aligned with wi. In our experiments, we set φi = 2. Since we only have interests on capturing opinion relations between words, we only pay attentions on the alignments between opinion target candidates (nouns/noun phrases) and potential opinion words (adjectives/verbs). If we directly use the alignment model, a noun (noun phrase) may align with other unrelated words, like prepositions or conjunctions and so on. Thus, we set constrains on the model: 1) Alignment links must be assigned among nouns/noun phrases, adjectives/verbs and null words. 
Aligning to null words means that this word has no modifier or modifies nothing; 2) Other unrelated words can only align with themselves. 3.1.3 Combining Syntax-based Method with Alignment-based Method In this subsection, we try to combine syntactic information with word alignment model. As mentioned in the first section, we adopt a partially supervised alignment model to make this combination. Here, the opinion relations obtained through the high-precision syntactic patterns (Section 3.1.1) are regarded as the ground truth and can only provide a part of full alignments in sentences. They are treated as the constrains for the word alignment model. Given some partial alignment links ˆA = {(k, ak)|k ∈[1, n], ak ∈[1, n]}, the optimal word alignment A∗= {(i, ai)|i ∈ [1, n], ai ∈[1, n]} can be obtained as A∗= argmax A P(A|S, ˆA), where (i, ai) means that a noun (noun phrase) at position i is aligned with its modifier at position ai. Since the labeled data provided by syntactic patterns is not a full alignment, we adopt a EM-based algorithm, named as constrained hill-climbing algorithm(Gao et al., 2010), to estimate the parameters in the model. In the training process, the constrained hill-climbing algorithm can ensure that the final model is marginalized on the partial alignment links. Particularly, in the E step, their method aims to find out the alignments which are consistent to the alignment links provided by syntactic patterns, where there are main two steps involved. 1) Optimize towards the constraints. This step aims to generate an initial alignments for alignment model (IBM-3 model in our method), which can be close to the constraints. First, a simple alignment model (IBM-1, IBM-2, HMM etc.) is trained. Then, the evidence being inconsistent to the partial alignment links will be got rid of by using the move operator operator mi,j which changes aj = i and the swap operator sj1,j2 which exchanges aj1 and aj2. The alignment is updated iteratively until no additional inconsistent links can be removed. 2) Towards the optimal alignment under the constraints. This step aims to optimize towards the optimal alignment under the constraints which starts from the aforementioned initial alignments. Gao et.al. (2010) set the corresponding cost value of the invalid move or swap operation in M and S to be negative, where M and S are respectively called Moving Matrix and Swapping Matrix, which record all possible move and swap costs between two different alignments. In this way, the invalid operators will never be picked which can guarantee that the final alignment links to have high probability to be consistent with the partial alignment links provided by high-precision syntactic patterns. Then in M-step, evidences from the neighbor of final alignments are collected so that we can produce the estimation of parameters for the next iteration. In the process, those statistics which come from inconsistent alignment links aren’t be picked up. Thus, we have P(wi|wai, ˆA) =  λ, otherwise P(wi|wai) + λ, inconsistent with ˆA (2) where λ means that we make soft constraints on the alignment model. As a result, we expect some errors generated through high-precision patterns (Section 3.1.1) may be revised in the alignment process. 3.2 Estimating Associations between Words After capturing opinion relations in sentences, we can obtain a lot of word pairs, each of which is comprised of an opinion target candidate and its corresponding modified word. 
Then the conditional probabilities between potential opinion target wt and potential opinion word wo can be estimated by using maximum likelihood estimation. Thus, we have P(wt|wo) = Count(wt,wo) Count(wo) , where Count(·) means the item’s frequency information. P(wt|wo) means the conditional probabilities between two words. At the same time, we can obtain conditional probability P(wo|wt). Then, 1758 similar to (Liu et al., 2012), the association between an opinion target candidate and its modifier is estimated as follows. Association(wt, wo) = (α × P(wt|wo) + (1 −α) × P(wo|wt))−1, where α is the harmonic factor. We set α = 0.5 in our experiments. 3.3 The Second Component: Estimating Candidate Confidence In the second component, we adopt a graph-based algorithm used in (Liu et al., 2012) to compute the confidence of each opinion target candidate, and the candidates with higher confidence than the threshold will be extracted as the opinion targets. Here, opinion words are regarded as the important indicators. We assume that two target candidates are likely to belong to the similar category, if they are modified by similar opinion words. Thus, we can propagate the opinion target confidences through opinion words. To model the mined associations between words, a bipartite graph is constructed, which is defined as a weighted undirected graph G = (V, E, W). It contains two kinds of vertex: opinion target candidates and potential opinion words, respectively denoted as vt ∈V and vo ∈V . As shown in Figure 2, the white vertices represent opinion target candidates and the gray vertices represent potential opinion words. An edge evt,vo ∈E between vertices represents that there is an opinion relation, and the weight w on the edge represents the association between two words. Figure 2: Modeling Opinion Relations between Words in a Bipartite Graph To estimate the confidence of each opinion target candidate, we employ a random walk algorithm on our graph, which iteratively computes the weighted average of opinion target confidences from neighboring vertices. Thus we have Ci+1 = (1 −β) × M × MT × Ci + β × I (3) where Ci+1 and Ci respectively represent the opinion target confidence vector in the (i + 1)th and ith iteration. M is the matrix of word associations, where Mi,j denotes the association between the opinion target candidate i and the potential opinion word j. And I is defined as the prior confidence of each candidate for opinion target. Similar to (Liu et al., 2012), we set each item in Iv = tf(v)idf(v) P v tf(v)idf(v), where tf(v) is the term frequency of v in the corpus, and df(v) is computed by using the Google n-gram corpus2. β ∈[0, 1] represents the impact of candidate prior knowledge on the final estimation results. In experiments, we set β = 0.4. The algorithm run until convergence which is achieved when the confidence on each node ceases to change in a tolerance value. 4 Experiments 4.1 Datasets and Evaluation Metrics In this section, to answer the questions mentioned in the first section, we collect a large collection named as LARGE, which includes reviews from three different domains and different languages. This collection was also used in (Liu et al., 2012). In the experiments, reviews are first segmented into sentences according to punctuation. The detailed statistical information of the used collection is shown in Table 2, where Restaurant is crawled from the Chinese Web site: www.dianping.com. 
The Hotel and MP3 are used in (Wang et al., 2011), which are respectively crawled from www.tripadvisor.com and www.amazon.com. For each dataset, we perform random sampling to generate testing set with different sizes, where we use sampled subsets with #sentences = 5 × 102, 103, 5 × 103, 104, 5 × 104, 105 and 106 sentences respectively. Each Domain Language Sentence Reviews Restaurant Chinese 1,683,129 395,124 Hotel English 1,855,351 185,829 MP3 English 289,931 30,837 Table 2: Experimental Dataset sentence is tokenized, part-of-speech tagged by using Stanford NLP tool3, and parsed by using Minipar toolkit. And the method of (Zhu et al., 2009) is used to identify noun phrases. 2http://books.google.com/ngrams/datasets 3http://nlp.stanford.edu/software/tagger.shtml 1759 We select precision and recall as the metrics. Specifically, to obtain the ground truth, we manually label all opinion targets for each subset. In this process, three annotators are involved. First, every noun/noun phrase and its contexts in review sentences are extracted. Then two annotators were required to judge whether every noun/noun phrase is opinion target or not. If a conflict happens, a third annotator will make judgment for final results. The average inter-agreements is 0.74. We also perform a significant test, i.e., a t-test with a default significant level of 0.05. 4.2 Compared Methods We select three methods for comparison as follows. • Syntax: It uses syntactic patterns mentioned in Section 3.1.1 in the first component to capture opinion relations in reviews. Then the associations between words are estimated and the graph based algorithm proposed in the second component (Section 3.3) is performed to extract opinion targets. • WAM: It is similar to Syntax, where the only difference is that WAM uses unsupervised WAM (Section 3.1.2) to capture opinion relations. • PSWAM is similar to Syntax and WAM, where the difference is that PSWAM uses the method mentioned in Section 3.1.3 to capture opinion relations, which incorporates syntactic information into word alignment model by using partially supervised framework. The experimental results on different domains are respectively shown in Figure 3, 4 and 5. 4.3 Syntax based Methods vs. Alignment based Methods Comparing Syntax with WAM and PSWAM, we can obtain the following observations: Figure 3: Experimental results on Restaurant Figure 4: Experimental results on Hotel Figure 5: Experimental results on MP3 1) When the size of the corpus is small, Syntax has better precision than alignment based methods (WAM and PSWAM). We believe the reason is that the high-precision syntactic patterns employed in Syntax can effectively capture opinion relations in a small amount of texts. In contrast, the methods based on word alignment model may suffer from data sparseness for parameter estimation, so the precision is lower. 2) However, when the size of the corpus increases, the precision of Syntax decreases, even worse than alignment based methods. We believe it’s because more noises were introduced from parsing errors with the increase of the size of the corpus , which will have more negative impacts on extraction results. In contrast, for estimating the parameters of alignment based methods, the data is more sufficient, so the precision is better compared with syntax based method. 3) We also observe that recall of Syntax is worse than other two methods. 
It’s because the human expressions of opinions are diverse and the manual designed syntactic patterns are limited to capture all opinion relations in sentences, which may miss an amount of correct opinion targets. 4) It’s interesting that the performance gap between these three methods is smaller with the increase of the size of the corpus (more than 50,000). We guess the reason is that when the data is sufficient enough, we can obtain sufficient statistics for each opinion target. In such situation, the graphbased ranking algorithm in the second component will be apt to be affected by the frequency information, so the final performance could not be sensitive to the performance of opinion relations iden1760 tification in the first component. Thus, in this situation, we can get conclusion that there is no obviously difference on performance between syntaxbased approach and alignment-based approach. 5) From the results on dataset with different languages and different domains, we can obtain the similar observations. It indicates that choosing either syntactic patterns or word alignment model for extracting opinion targets can take a few consideration on the language and domain of the corpus. Thus, based on the above observations, we can draw the following conclusions: making chooses between different methods is only related to the size of the corpus. The method based on syntactic patterns is more suitable for small corpus (#sentences < 5 × 103 shown in our experiments). And word alignment model is more suitable for medium corpus (5 × 103 < #sentences < 5 × 104). Moreover, when the size of the corpus is big enough, the performance of two kinds of methods tend to become the same (#sentences ≥105 shown in our experiments). 4.4 Is It Useful Combining Syntactic Patterns with Word Alignment Model In this subsection, we try to see whether combining syntactic information with alignment model by using PSWAM is effective or not for opinion target extraction. From the results in Figure 3, 4 and 5, we can see that PSWAM has the similar recall compared with WAM in all datasets. PSWAM outperforms WAM on precision in all dataset. But the precision gap between PSWAM and WAM decreases when the size of the corpus increases. When the size is larger than 5 × 104, the performance of these two methods is almost the same. We guess the reason is that more noises from parsing errors will be introduced by syntactic patterns with the increase of the size of corpus , which have negative impacts on alignment performance. At the same time, as mentioned above, a great deal of reviews will bring sufficient statistics for estimating parameters in alignment model, so the roles of partial supervision from syntactic information will be covered by frequency information used in our graph based ranking algorithm. Compared with State-of-the-art Methods. However, it’s not say that this combination is not useful. From the results, we still see that PSWAM outperforms WAM in all datasets on precision when size of corpus is smaller than 5 × 104. To further prove the effectiveness of our combination, we compare PSWAM with some state-of-the-art methods, including Hu (Hu and Liu, 2004a), which extracted frequent opinion target words based on association mining rules, DP (Qiu et al., 2011), which extracted opinion targets through syntactic patterns, and LIU (Liu et al., 2012), which fulfilled this task by using unsupervised WAM. The parameter settings in these baselines are the same as the settings in the original papers. 
Because of the space limitation, we only show the results on Restaurant and Hotel, as shown in Figure 6 and 7. Figure 6: Compared with the State-of-the-art Methods on Restaurant Figure 7: Compared with the State-of-the-art Methods on Hotel From the experimental results, we can obtain the following observations. PSWAM outperforms other methods in most datasets. This indicates that our method based on PSWAM is effective for opinion target extraction. Especially compared PSWAM with LIU, both of which are based on word alignment model, we can see PSWAM identifies opinion relations by performing WAM under partial supervision, which can effectively improve the precision when dealing with small and medium corpus. However, these improvements are limited when the size of the corpus increases, which has the similar observations obtained above. The Impact of Syntactic Information on Word Alignment Model. Although we have prove the effectiveness of PSWAM in the corpus with small and medium size, we are still curious about how the performance varies when we incor1761 porate different amount of syntactic information into WAM. In this experiment, we rank the used syntactic patterns mentioned in Section 3.1.1 according to the quantities of the extracted alignment links by these patterns. Then, to capture opinion relations, we respectively use top N syntactic patterns according to frequency mentioned above to generate partial alignment links for PSWAM in section 3.1.3. We respectively define N=[1,7]. The larger is N , the more syntactic information is incorporated. Because of the space limitation, only the average performance of all dataset is shown in Figure 8. Figure 8: The Impacts of Different Syntactic Information on Word Alignment Model In Figure 8, we can observe that the syntactic information mainly have effect on precision. When the size of the corpus is small, the opinion relations mined by high-precision syntactic patterns are usually correct, so incorporating more syntactic information can improve the precision of word alignment model more. However, when the size of the corpus increases, incorporating more syntactic information has little impact on precision. 5 Conclusions and Future Work This paper discusses the performance variation of syntax based methods and alignment based methods on opinion target extraction task for the dataset with different sizes, different languages and different domains. Through experimental results, we can see that choosing which method is not related with corpus domain and language, but strongly associated with the size of the corpus . We can conclude that syntax-based method is likely to be more effective when the size of the corpus is small, and alignment-based methods are more useful for the medium size corpus. We further verify that incorporating syntactic information into word alignment model by using PSWAM is effective when dealing with the corpora with small or medium size. When the size of the corpus is larger and larger, the performance gap between syntax based, WAM and PSWAM will decrease. In future work, we will extract opinion targets based on not only opinion relations. Other semantic relations, such as the topical associations between opinion targets (or opinion words) should also be employed. We believe that considering multiple semantic associations will help to improve the performance. In this way, how to model heterogenous relations in a unified model for opinion targets extraction is worthy to be studied. 
Acknowledgement This work was supported by the National Natural Science Foundation of China (No. 61070106, No. 61272332 and No. 61202329), the National High Technology Development 863 Program of China (No. 2012AA011102), the National Basic Research Program of China (No. 2012CB316300), the Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation and the Opening Project of the Beijing Key Laboratory of Internet Culture and Digital Dissemination Research (ICDD201201). References Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Comput. Linguist., 19(2):263–311, June. Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the Conference on Web Search and Web Data Mining (WSDM). Qin Gao, Nguyen Bach, and Stephan Vogel. 2010. A semi-supervised word alignment algorithm with partial manual alignments. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 1–10, Uppsala, Sweden, July. Association for Computational Linguistics. Minqing Hu and Bing Liu. 2004a. Mining opinion features in customer reviews. In Proceedings of the Conference on Artificial Intelligence (AAAI). Minqing Hu and Bing Liu. 2004b. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '04, pages 168–177, New York, NY, USA. ACM. Wei Jin and Hung Hay Ho. 2009. A novel lexicalized HMM-based learning framework for web opinion mining. In Proceedings of the International Conference on Machine Learning (ICML). Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. J. ACM, 46(5):604–632, September. Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Yingju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In Chu-Ren Huang and Dan Jurafsky, editors, COLING, pages 653–661. Tsinghua University Press. Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opinions on the web. In Allan Ellis and Tatsuya Hagino, editors, WWW, pages 342–351. ACM. Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1346–1356, Jeju Island, Korea, July. Association for Computational Linguistics. Kang Liu, Liheng Xu, Yang Liu, and Jun Zhao. 2013. Opinion target extraction using partially supervised word alignment model. Tengfei Ma and Xiaojun Wan. 2010. Opinion target extraction in Chinese news comments. In Chu-Ren Huang and Dan Jurafsky, editors, COLING (Posters), pages 782–790. Chinese Information Processing Society of China. Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, pages 339–346, Stroudsburg, PA, USA. Association for Computational Linguistics. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9–27. Bo Wang and Houfeng Wang.
2008. Bootstrapping both product features and opinion words from Chinese customer reviews with cross-inducing. Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect keyword supervision. In Chid Apté, Joydeep Ghosh, and Padhraic Smyth, editors, KDD, pages 618–626. ACM. Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In EMNLP, pages 1533–1541. ACL. Qi Zhang, Yuanbin Wu, Tao Li, Mitsunori Ogihara, Joseph Johnson, and Xuanjing Huang. 2009. Mining product reviews based on shallow dependency parsing. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR '09, pages 726–727, New York, NY, USA. ACM. Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O'Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In Chu-Ren Huang and Dan Jurafsky, editors, COLING (Posters), pages 1462–1470. Chinese Information Processing Society of China. Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, and Muhua Zhu. 2009. Multi-aspect opinion polling from textual reviews. In David Wai-Lok Cheung, Il-Yeol Song, Wesley W. Chu, Xiaohua Hu, and Jimmy J. Lin, editors, CIKM, pages 1799–1802. ACM.
2013
172
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1764–1773, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Mining Opinion Words and Opinion Targets in a Two-Stage Framework Liheng Xu, Kang Liu, Siwei Lai, Yubo Chen and Jun Zhao National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China {lhxu, kliu, swlai, ybchen, jzhao}@nlpr.ia.ac.cn Abstract This paper proposes a novel two-stage method for mining opinion words and opinion targets. In the first stage, we propose a Sentiment Graph Walking algorithm, which naturally incorporates syntactic patterns in a Sentiment Graph to extract opinion word/target candidates. Then random walking is employed to estimate confidence of candidates, which improves extraction accuracy by considering confidence of patterns. In the second stage, we adopt a self-learning strategy to refine the results from the first stage, especially for filtering out high-frequency noise terms and capturing the long-tail terms, which are not investigated by previous methods. The experimental results on three real world datasets demonstrate the effectiveness of our approach compared with stateof-the-art unsupervised methods. 1 Introduction Opinion mining not only assists users to make informed purchase decisions, but also helps business organizations understand and act upon customer feedbacks on their products or services in real-time. Extracting opinion words and opinion targets are two key tasks in opinion mining. Opinion words refer to those terms indicating positive or negative sentiment. Opinion targets represent aspects or attributes of objects toward which opinions are expressed. Mining these terms from reviews of a specific domain allows a more thorough understanding of customers’ opinions. Opinion words and opinion targets often cooccur in reviews and there exist modified relations (called opinion relation in this paper) between them. For example, in the sentence “It has a clear screen”, “clear” is an opinion word and “screen” is an opinion target, and there is an opinion relation between the two words. It is natural to identify such opinion relations through common syntactic patterns (also called opinion patterns in this paper) between opinion words and targets. For example, we can extract “clear” and “screen” by using a syntactic pattern “Adj-{mod}-Noun”, which captures the opinion relation between them. Although previous works have shown the effectiveness of syntactic patterns for this task (Qiu et al., 2009; Zhang et al., 2010), they still have some limitations as follows. False Opinion Relations: As an example, the phrase “everyday at school” can be matched by a pattern “Adj-{mod}-(Prep)-{pcomp-n}-Noun”, but it doesn’t bear any sentiment orientation. We call such relations that match opinion patterns but express no opinion false opinion relations. Previous pattern learning algorithms (Zhuang et al., 2006; Kessler and Nicolov, 2009; Jijkoun et al., 2010) often extract opinion patterns by frequency. However, some high-frequency syntactic patterns can have very poor precision (Kessler and Nicolov, 2009). False Opinion Targets: In another case, the phrase “wonderful time” can be matched by an opinion pattern “Adj-{mod}-Noun”, which is widely used in previous works (Popescu and Etzioni, 2005; Qiu et al., 2009). 
As can be seen, this phrase does express a positive opinion but unfortunately “time” is not a valid opinion target for most domains such as MP3. Thus, false opinion targets are extracted. Due to the lack of ground-truth knowledge for opinion targets, non-target terms introduced in this way can be hardly filtered out. Long-tail Opinion Targets: We further notice that previous works prone to extract opinion targets with high frequency (Hu and Liu, 2004; Popescu and Etzioni, 2005; Qiu et al., 2009; Zhu et al., 2009), and they often have difficulty in identifying the infrequent or long-tail opinion targets. 1764 To address the problems stated above, this paper proposes a two-stage framework for mining opinion words and opinion targets. The underlying motivation is analogous to the novel idea “Mine the Easy, Classify the Hard” (Dasgupta and Ng, 2009). In our first stage, we propose a Sentiment Graph Walking algorithm to cope with the false opinion relation problem, which mines easy cases of opinion words/targets. We speculate that it may be helpful to introduce a confidence score for each pattern. Concretely, we create a Sentiment Graph to model opinion relations among opinion word/target/pattern candidates and apply random walking to estimate confidence of them. Thus, confidence of pattern is considered in a unified process. Patterns that often extract false opinion relations will have low confidence, and terms introduced by low-confidence patterns will also have low confidence accordingly. This could potentially improve the extraction accuracy. In the second stage, we identify the hard cases, which aims to filter out false opinion targets and extract long-tail opinion targets. Previous supervised methods have been shown to achieve stateof-the-art results for this task (Wu et al., 2009; Jin and Ho, 2009; Li et al., 2010). However, the big challenge for fully supervised method is the lack of annotated training data. Therefore, we adopt a self-learning strategy. Specifically, we employ a semi-supervised classifier to refine the target results from the first stage, which uses some highly confident target candidates as the initial labeled examples. Then opinion words are also refined. Our main contributions are as follows: • We propose a Sentiment Graph Walking algorithm to mine opinion words and opinion targets from reviews, which naturally incorporates confidence of syntactic pattern in a graph to improve extraction performance. To our best knowledge, the incorporation of pattern confidence in such a Sentiment Graph has never been studied before for opinion words/targets mining task (Section 3). • We adopt a self-learning method for refining opinion words/targets generated by Sentiment Graph Walking. Specifically, it can remove high-frequency noise terms and capture longtail opinion targets in corpora (Section 4). • We perform experiments on three real world datasets, which demonstrate the effectiveness of our method compared with state-of-the-art unsupervised methods (Section 5). 2 Related Work In opinion words/targets mining task, most unsupervised methods rely on identifying opinion relations between opinion words and opinion targets. Hu and Liu (2004) proposed an association mining technique to extract opinion words/targets. The simple heuristic rules they used may potentially introduce many false opinion words/targets. To identify opinion relations more precisely, subsequent research work exploited syntax information. 
Popescu and Etzioni (2005) used manually compiled syntactic patterns and Pointwise Mutual Information (PMI) to extract opinion words/targets. Qiu et al. (2009) proposed a bootstrapping framework called Double Propagation which introduced eight heuristic syntactic rules. While manually defining syntactic patterns can be time-consuming and error-prone, we learn syntactic patterns automatically from data. There has been extensive work on mining opinion words and opinion targets by syntactic pattern learning. Riloff and Wiebe (2003) performed pattern learning through bootstrapping while extracting subjective expressions. Zhuang et al. (2006) obtained various dependency relationship templates from an annotated movie corpus and applied them to supervised opinion word/target extraction. Kobayashi et al. (2007) adopted a supervised learning technique to search for useful syntactic patterns as contextual clues. Our approach is similar to (Wiebe and Riloff, 2005) and (Xu et al., 2013), all of which apply syntactic pattern learning and adopt a self-learning strategy. However, the task of (Wiebe and Riloff, 2005) was to classify sentiment orientation at the sentence level, while ours needs to extract more detailed information at the term level. In addition, our method extends (Xu et al., 2013), and we give a more complete and in-depth analysis of the problems raised in the first section. Many works have also employed graph-based methods (Li et al., 2012; Zhang et al., 2010; Hassan and Radev, 2010; Liu et al., 2012), but none of them considered the confidence of patterns in the graph. In supervised approaches, various kinds of models have been applied, such as HMM (Jin and Ho, 2009), SVM (Wu et al., 2009) and CRFs (Li et al., 2010). The downside of supervised methods is the difficulty of obtaining annotated training data in practical applications. Also, classifiers trained on one domain often fail to give satisfactory results when shifted to another domain. Our method does not rely on annotated training data. 3 The First Stage: Sentiment Graph Walking Algorithm In the first stage, we propose a graph-based algorithm called Sentiment Graph Walking to mine opinion words and opinion targets from reviews. 3.1 Opinion Pattern Learning for Candidate Generation For a given sentence, we first obtain its dependency tree. Following (Hu and Liu, 2004; Popescu and Etzioni, 2005; Qiu et al., 2009), we regard all adjectives as opinion word candidates (OC) and all nouns or noun phrases as opinion target candidates (TC). A statistic-based method from (Zhu et al., 2009) is used to detect noun phrases. Candidates are then replaced by the wildcards "<OC>" or "<TC>". Figure 1 gives a dependency tree example generated by Minipar (Lin, 1998).
Figure 1: The dependency tree of the sentence "The style of the screen is gorgeous".
We extract two kinds of opinion patterns: the "OC-TC" pattern and the "TC-TC" pattern. The "OC-TC" pattern is the shortest path between an OC wildcard and a TC wildcard in the dependency tree, which captures the opinion relation between an opinion word candidate and an opinion target candidate. Similarly, the "TC-TC" pattern captures the opinion relation between two opinion target candidates.1 Words in opinion patterns are replaced by their POS tags, and we constrain each pattern to contain at most two words other than wildcards.
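Since the pattern extraction step is described only in prose, the following is a minimal sketch (ours, not the authors' implementation) of how such a shortest-path pattern could be read off a dependency parse. It assumes tokens are available as simple dictionaries with id/word/pos/head/rel fields (head = 0 for the root); the function and field names are hypothetical.

```python
from typing import Dict, List, Optional

# A token: {"id": int, "word": str, "pos": str, "head": int, "rel": str}; head == 0 is the root.
Token = Dict[str, object]

def path_to_root(tok_id: int, tokens: Dict[int, Token]) -> List[int]:
    """Chain of token ids from tok_id up to the root of the dependency tree."""
    path = [tok_id]
    while tokens[path[-1]]["head"] != 0:
        path.append(tokens[path[-1]]["head"])
    return path

def shortest_dep_path(a: int, b: int, tokens: Dict[int, Token]) -> List[int]:
    """Shortest path between two tokens, obtained by joining their
    root paths at the lowest common ancestor."""
    pa, pb = path_to_root(a, tokens), path_to_root(b, tokens)
    ancestors_a = set(pa)
    lca = next(t for t in pb if t in ancestors_a)       # lowest common ancestor
    up = pa[: pa.index(lca) + 1]                        # a ... lca
    down = list(reversed(pb[: pb.index(lca)]))          # nodes below lca on b's side, down to b
    return up + down

def opinion_pattern(a: int, b: int, wildcards: Dict[int, str],
                    tokens: Dict[int, Token], max_inner: int = 2) -> Optional[str]:
    """Render the path as a pattern string such as "<OC>{mod}<TC>":
    candidate tokens become wildcards, other words are abstracted to
    their POS tags, and patterns with more than `max_inner` non-wildcard
    words are discarded, as constrained in Section 3.1."""
    path = shortest_dep_path(a, b, tokens)
    if len([t for t in path if t not in wildcards]) > max_inner:
        return None
    render = lambda t: wildcards.get(t, f"({tokens[t]['pos']})")
    out = [render(path[0])]
    for u, v in zip(path, path[1:]):
        # the edge label is the dependency relation of whichever node is the child
        rel = tokens[u]["rel"] if tokens[u]["head"] == v else tokens[v]["rel"]
        out.append("{" + str(rel) + "}")
        out.append(render(v))
    return "".join(out)
```

Under this sketch, a parse of the Figure 1 sentence with its adjective and noun candidates marked as wildcards would yield strings of the same shape as the patterns quoted in the text (e.g., "<OC>{pred}(VBE){s}<TC>").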
In Figure 1, there are two opinion patterns marked out by dashed lines: "<OC>{pred}(VBE){s}<TC>" for the "OC-TC" type and "<TC>{mod}(Prep){pcomp-n}<TC>" for the "TC-TC" type. After all patterns are generated, we drop those patterns with frequency lower than a threshold F.
1We do not identify the opinion relation "OC-OC" because this relation is often unreliable.
3.2 Sentiment Graph Construction To model the opinion relations among opinion words/targets and opinion patterns, a graph called the Sentiment Graph is constructed, which is a weighted, directed graph G = (V, E, W), where
• V = V_oc ∪ V_tc ∪ V_p is the set of vertices in G, where V_oc, V_tc and V_p represent the sets of opinion word candidates, opinion target candidates and opinion patterns, respectively.
• E = E_po ∪ E_pt ⊆ (V_p × V_oc) ∪ (V_p × V_tc) is the weighted, bi-directional edge set in G, where E_po and E_pt are mutually exclusive sets of edges connecting opinion word/target vertices to opinion pattern vertices. Note that there are no edges between V_oc and V_tc.
• W : E → R+ is the weight function, which assigns a non-negative weight to each edge. For each edge (e : v_a → v_b) ∈ E, where v_a, v_b ∈ V, the weight function is w(v_a, v_b) = freq(v_a, v_b) / freq(v_a), where freq(·) is the frequency of a candidate extracted by opinion patterns or the co-occurrence frequency between two candidates.
Figure 2 shows an example of a Sentiment Graph.
Figure 2: An example of Sentiment Graph.
3.3 Confidence Estimation by Random Walking with Restart We believe that considering the confidence of patterns can potentially improve extraction accuracy. Our intuition is: (i) if an opinion word/target has higher confidence, the syntactic patterns containing this term are more likely to be used to express customers' opinions; (ii) if an opinion pattern has higher confidence, terms extracted by this pattern are more likely to be correct. This is a reinforcement process. We use the Random Walking with Restart (RWR) algorithm to implement the idea described above. Let $M_{oc\_p}$ denote the transition matrix from V_oc to V_p, where for v_o ∈ V_oc and v_p ∈ V_p, $M_{oc\_p}(v_o, v_p) = w(v_o, v_p)$; similarly, we have $M_{tc\_p}$, $M_{p\_oc}$ and $M_{p\_tc}$. Let c denote a confidence vector over candidates, so that $c^t_{oc}$, $c^t_{tc}$ and $c^t_{p}$ are the confidence vectors for opinion word/target/pattern candidates after walking t steps. Initially, $c^0_{oc}$ is uniformly distributed over a few domain-independent opinion word seeds; then the following formulas are updated iteratively until $c^t_{tc}$ and $c^t_{oc}$ converge:
$$c^{t+1}_{p} = M^{\top}_{oc\_p}\, c^{t}_{oc} + M^{\top}_{tc\_p}\, c^{t}_{tc} \quad (1)$$
$$c^{t+1}_{oc} = (1-\lambda)\, M^{\top}_{p\_oc}\, c^{t}_{p} + \lambda\, c^{0}_{oc} \quad (2)$$
$$c^{t+1}_{tc} = M^{\top}_{p\_tc}\, c^{t}_{p} \quad (3)$$
where $M^{\top}$ is the transpose of matrix M and λ is a small probability of teleporting back to the seed vertices, which prevents the walk from drifting too far away from the seeds. In the experiments below, λ is set to 0.1 empirically. 4 The Second Stage: Refining Extracted Results Using Self-Learning At the end of the first stage, we obtain a ranked list of opinion words and opinion targets, in which higher-ranked terms are more likely to be correct. Nevertheless, there are still some issues that need to be addressed: 1) In the target candidate list, some high-frequency frivolous general nouns such as "thing" and "people" are also highly ranked. This is because reviews contain many opinion expressions with non-target terms, such as "good thing", "nice people", etc.
Due to the lack of ground-truth knowledge about opinion targets, the false opinion target problem remains unsolved. 2) In another respect, long-tail opinion targets may have low degree in the Sentiment Graph. Hence their confidence will be low even though they may be extracted by high-quality patterns. Therefore, the first stage is incapable of dealing with the long-tail opinion target problem. 3) Furthermore, the first stage also extracts some high-frequency false opinion words such as "every", "many", etc. Many terms of this kind are introduced by high-frequency false opinion targets, since there are large numbers of phrases like "every time" and "many people". This issue is thus a side effect of the false opinion target problem. To address these issues, we exploit a self-learning strategy. For opinion targets, we use a semi-supervised binary classifier, called the target refining classifier, to refine target candidates. For opinion words, we use the classified list of opinion targets to further refine the extracted opinion word candidates. 4.1 Opinion Target Refinement There are two keys to opinion target refinement: (i) how to generate the initial labeled data for the target refining classifier, and (ii) how to represent a long-tail opinion target candidate other than by comparing frequencies between different targets. For the first key, it is clearly improper to select high-confidence targets as positive examples and low-confidence targets as negative examples2, because there is noise with high confidence and there are long-tail targets with low confidence. Fortunately, a large proportion of the general-noun noise consists of the most frequent words in common texts. Therefore, we can build a small domain-independent general noun (GN) corpus from large web corpora to cover the most frequently used general nouns. Labeled examples can then be drawn from the target candidate list and the GN corpus. For the second key, we use opinion words and opinion patterns, together with their confidence scores, to represent an opinion target. In this way, a long-tail opinion target can be judged by its own contexts, whose weights are learnt from the contexts of frequent opinion targets. Thus, if a long-tail opinion target candidate has strong contextual support, it has a higher probability of being found despite its low frequency. Creation of General Noun Corpora. The 1000 most frequent nouns in Google-1-gram3 were selected as general noun candidates. In addition, we added all nouns in the top three levels of hyponyms of the four WordNet (Miller, 1995) synsets "object", "person", "group" and "measure" to the GN corpus. Our idea is based on the fact that a term is more general when it sits at a higher level of the WordNet hierarchy. Inapplicable candidates were then discarded and a 3071-word English GN corpus was created. Another Chinese GN corpus with 3493 words was generated in a similar way from HowNet (Gan and Wong, 2000).
2Note that "positive" and "negative" here denote opinion targets and non-target terms respectively; they do not indicate sentiment polarities.
3http://books.google.com/ngrams.
Generation of Labeled Examples. Let T = {Y+1, Y−1} denote the initial labeled set, where the N most highly confident target candidates that are not in our GN corpora are taken as the positive example set Y+1, and another N terms from the GN corpora that are also top-ranked in the target list are selected as the negative example set Y−1. The remaining unlabeled candidates are denoted by T*.
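As a rough illustration of how such a GN corpus could be assembled, the sketch below (ours, not the authors' code) combines a frequency list with the top levels of the WordNet noun hierarchy via NLTK. The paper does not say which WordNet senses of "object", "person", "group" and "measure" were used or how "inapplicable candidates" were filtered, so the first-sense choice and the skipped manual filtering step are assumptions.

```python
from nltk.corpus import wordnet as wn

def hyponym_lemmas(root_synset, max_depth=3):
    """Collect lemmas from the root synset and the top `max_depth`
    levels of hyponyms below it."""
    lemmas, frontier = set(), [(root_synset, 0)]
    while frontier:
        syn, depth = frontier.pop()
        lemmas.update(l.name().replace("_", " ").lower() for l in syn.lemmas())
        if depth < max_depth:
            frontier.extend((h, depth + 1) for h in syn.hyponyms())
    return lemmas

def build_general_noun_corpus(frequent_nouns):
    """frequent_nouns: e.g. the 1000 most frequent nouns from an n-gram corpus.
    Returns general-noun candidates; the paper's manual removal of
    inapplicable candidates is not reproduced here."""
    general = {n.lower() for n in frequent_nouns}
    for name in ("object", "person", "group", "measure"):
        for syn in wn.synsets(name, pos=wn.NOUN)[:1]:  # first sense only (an assumption)
            general |= hyponym_lemmas(syn, max_depth=3)
    return general
```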
Feature Representation for the Classifier. T and T* are given in the form {(x_i, y_i)}. For a target candidate t_i, x_i = (o_1, . . . , o_n, p_1, . . . , p_m)^T represents its feature vector, where o_j is an opinion word feature and p_k is an opinion pattern feature. The feature values are defined as follows:
$$x(o_j) = conf(o_j) \times \frac{\sum_{p_k} freq(t_i, o_j, p_k)}{freq(o_j)} \quad (4)$$
$$x(p_k) = conf(p_k) \times \frac{\sum_{o_j} freq(t_i, o_j, p_k)}{freq(p_k)} \quad (5)$$
where conf(·) denotes the confidence score estimated by RWR and freq(·) has the same meaning as in Section 3.2. In particular, freq(t_i, o_j, p_k) is the frequency with which pattern p_k extracts opinion target t_i together with opinion word o_j. Target Refinement Classifier: We use a support vector machine as the binary classifier. The classification problem can thus be formulated as finding a hyperplane ⟨w, b⟩ that separates both the labeled set T and the unlabeled set T* with maximum margin. The optimization goal is to minimize over (T, T*, w, b, ξ_1, ..., ξ_n, ξ*_1, ..., ξ*_k):
$$\frac{1}{2}\|w\|^{2} + C\sum_{i=0}^{n}\xi_i + C^{*}\sum_{j=0}^{k}\xi^{*}_{j}$$
subject to:
$$\forall_{i=1}^{n}: \; y_i[w \cdot x_i + b] \geq 1 - \xi_i, \qquad \forall_{j=1}^{k}: \; y^{*}_{j}[w \cdot x^{*}_{j} + b] \geq 1 - \xi^{*}_{j}$$
$$\forall_{i=1}^{n}: \; \xi_i > 0, \qquad \forall_{j=1}^{k}: \; \xi^{*}_{j} > 0$$
where y_i, y*_j ∈ {+1, −1}, x_i and x*_j are feature vectors, and C and C* are parameters set by the user. This optimization problem can be solved by a standard Transductive Support Vector Machine (TSVM) (Joachims, 1999). 4.2 Opinion Word Refinement We use the classified opinion target results to refine opinion words using the following equation:
$$s(o_j) = \sum_{t_i \in T}\sum_{p_k} \frac{s(t_i)\, conf(p_k)\, freq(t_i, o_j, p_k)}{freq(t_i)}$$
where T is the set of opinion targets classified as positive during opinion target refinement, s(t_i) denotes the confidence score output by the target refining classifier, and freq(t_i) = Σ_{o_j} Σ_{p_k} freq(t_i, o_j, p_k). A higher score s(o_j) means that candidate o_j is more likely to be an opinion word. 5 Experiments 5.1 Datasets and Evaluation Metrics Datasets: We select three real-world datasets to evaluate our approach. The first one, the Customer Review Dataset (CRD) (Hu and Liu, 2004), contains reviews of five different products (represented by D1 to D5) in English. The second dataset is pre-annotated and published in COAE08,4 from which two domains of Chinese reviews are selected. Finally, we employ the benchmark dataset of (Wang et al., 2011), which we name Large. We manually annotated opinion words and opinion targets as the gold standard. Three annotators were involved: first, two annotators were required to annotate opinion words and opinion targets in sentences; when conflicts arose, the third annotator made the final judgment. The average Kappa values over the two domains were 0.71 for opinion words and 0.66 for opinion targets. Detailed information about our datasets is shown in Table 1.
Dataset | Domain | #Sentences | #OW | #OT
Large (English) | Hotel | 10,000 | 434 | 1,015
Large (English) | MP3 | 10,000 | 559 | 1,158
COAE08 (Chinese) | Camera | 2,075 | 351 | 892
COAE08 (Chinese) | Car | 4,783 | 622 | 1,179
Table 1: The detailed information of the datasets. OW stands for opinion words and OT stands for opinion targets.
Pre-processing: First, HTML tags are removed from the texts. Then Minipar (Lin, 1998) is used to parse the English corpora, and the Stanford Parser (Chang et al., 2009) is used for the Chinese corpora.
4http://ir-china.org.cn/coae2008.html
Stemming and fuzzy matching are also performed following previous work (Hu and Liu, 2004). Evaluation Metrics: We evaluate our method by precision (P), recall (R) and F-measure (F).
Methods | D1 (P/R/F) | D2 (P/R/F) | D3 (P/R/F) | D4 (P/R/F) | D5 (P/R/F) | Avg. F
Hu | 0.75/0.82/0.78 | 0.71/0.79/0.75 | 0.72/0.76/0.74 | 0.69/0.82/0.75 | 0.74/0.80/0.77 | 0.76
DP | 0.87/0.81/0.84 | 0.90/0.81/0.85 | 0.90/0.86/0.88 | 0.81/0.84/0.82 | 0.92/0.86/0.89 | 0.86
Zhang | 0.83/0.84/0.83 | 0.86/0.85/0.85 | 0.86/0.88/0.87 | 0.80/0.85/0.82 | 0.86/0.86/0.86 | 0.85
Ours-Stage1 | 0.79/0.85/0.82 | 0.82/0.87/0.84 | 0.83/0.87/0.85 | 0.78/0.88/0.83 | 0.82/0.88/0.85 | 0.84
Ours-Full | 0.86/0.82/0.84 | 0.88/0.83/0.85 | 0.89/0.86/0.87 | 0.83/0.86/0.84 | 0.89/0.85/0.87 | 0.86
Table 2: Results of opinion target extraction on the Customer Review Dataset.
Methods | D1 (P/R/F) | D2 (P/R/F) | D3 (P/R/F) | D4 (P/R/F) | D5 (P/R/F) | Avg. F
Hu | 0.57/0.75/0.65 | 0.51/0.76/0.61 | 0.57/0.73/0.64 | 0.54/0.62/0.58 | 0.62/0.67/0.64 | 0.62
DP | 0.64/0.73/0.68 | 0.57/0.79/0.66 | 0.65/0.70/0.67 | 0.61/0.65/0.63 | 0.70/0.68/0.69 | 0.67
Ours-Stage1 | 0.61/0.75/0.67 | 0.55/0.80/0.65 | 0.63/0.75/0.68 | 0.60/0.69/0.64 | 0.68/0.70/0.69 | 0.67
Ours-Full | 0.64/0.74/0.69 | 0.59/0.79/0.68 | 0.66/0.71/0.68 | 0.65/0.67/0.66 | 0.72/0.67/0.69 | 0.68
Table 3: Results of opinion word extraction on the Customer Review Dataset.
5.2 Our Method vs. the State-of-the-art Three state-of-the-art unsupervised methods are used as competitors to our method. Hu extracts opinion words/targets using adjacency rules (Hu and Liu, 2004). DP uses a bootstrapping algorithm called Double Propagation (Qiu et al., 2009). Zhang is an enhanced version of DP that employs the HITS algorithm (Kleinberg, 1999) to rank opinion targets (Zhang et al., 2010). Ours-Full is the full implementation of our method; we employ SVMlight (Joachims, 1999) as the target refining classifier, with default parameters except that the bias term is set to 0. Ours-Stage1 only uses the Sentiment Graph Walking algorithm, without opinion word and opinion target refinement. All of the above approaches use the same five common opinion word seeds. This choice of opinion seeds seems reasonable, as most people can easily come up with five opinion words such as "good", "bad", etc. The performance on the five products of the CRD dataset is shown in Table 2 and Table 3. Zhang does not extract opinion words, so its results for opinion words are not taken into account. We can see that Ours-Stage1 achieves superior recall but loses some precision compared with DP and Zhang. This may be because the CRD dataset is too small, so our statistics-based method may suffer from data sparseness. In spite of this, Ours-Full achieves an F-measure comparable to DP, which is a well-designed rule-based method. The results on the two larger datasets are shown in Table 4 and Table 5, from which we make the following observations: (i) All syntax-based methods outperform Hu, showing the importance of syntactic information in opinion relation identification. (ii) Ours-Full outperforms the three competitors on all domains. (iii) Ours-Stage1 outperforms Zhang, especially in terms of recall. We believe this is due to our automatic pattern learning algorithm. Moreover, Ours-Stage1 does not lose much precision compared with Zhang, which indicates the benefit of estimating pattern confidence in the Sentiment Graph. (iv) Ours-Full achieves a 4-9% improvement in precision over the most accurate competing method, which shows the effectiveness of our second stage. 5.3 Detailed Discussions This section considers several variants of our method for a more detailed analysis.
Ours-Bigraph constructs a bipartite graph between opinion words and targets, so opinion patterns are not included in the graph; the RWR algorithm is then used to assign confidence only to opinion word/target candidates. Ours-Stage2 contains only the second stage and does not apply the Sentiment Graph Walking algorithm; hence the confidence scores conf(·) in Equations (4) and (5) are unavailable and are set to 1. The initial labeled examples are exactly the same as for Ours-Full. Due to space limitations, we only give an analysis of the opinion target extraction results, shown in Figure 3.
Methods | MP3 (P/R/F) | Hotel (P/R/F) | Camera (P/R/F) | Car (P/R/F) | Avg. F
Hu | 0.53/0.55/0.54 | 0.55/0.57/0.56 | 0.63/0.65/0.64 | 0.62/0.58/0.60 | 0.58
DP | 0.66/0.57/0.61 | 0.66/0.60/0.63 | 0.71/0.70/0.70 | 0.72/0.65/0.68 | 0.66
Zhang | 0.65/0.62/0.63 | 0.64/0.66/0.65 | 0.71/0.78/0.74 | 0.69/0.68/0.68 | 0.68
Ours-Stage1 | 0.62/0.68/0.65 | 0.63/0.71/0.67 | 0.69/0.80/0.74 | 0.66/0.71/0.68 | 0.69
Ours-Full | 0.73/0.71/0.72 | 0.75/0.73/0.74 | 0.78/0.81/0.79 | 0.76/0.73/0.74 | 0.75
Table 4: Results of opinion target extraction on Large and COAE08.
Methods | MP3 (P/R/F) | Hotel (P/R/F) | Camera (P/R/F) | Car (P/R/F) | Avg. F
Hu | 0.48/0.65/0.55 | 0.51/0.68/0.58 | 0.72/0.74/0.73 | 0.70/0.71/0.70 | 0.64
DP | 0.58/0.62/0.60 | 0.60/0.66/0.63 | 0.80/0.73/0.76 | 0.79/0.71/0.75 | 0.68
Ours-Stage1 | 0.59/0.69/0.64 | 0.61/0.71/0.66 | 0.79/0.78/0.78 | 0.77/0.77/0.77 | 0.71
Ours-Full | 0.64/0.67/0.65 | 0.67/0.69/0.68 | 0.82/0.78/0.80 | 0.80/0.76/0.78 | 0.73
Table 5: Results of opinion word extraction on Large and COAE08.
Figure 3: Opinion target extraction results.
5.3.1 The Effect of Sentiment Graph Walking We can see that our graph-based methods (Ours-Bigraph and Ours-Stage1) achieve higher recall than Zhang. By learning patterns automatically, our method captures opinion relations more efficiently. Also, Ours-Stage1 outperforms Ours-Bigraph, especially in precision. We believe this is because Ours-Stage1 estimates the confidence of patterns, so false opinion relations are reduced. Therefore, considering pattern confidence is beneficial as expected and alleviates the false opinion relation problem. On the other hand, we find that Ours-Stage2 performs much worse than Ours-Full. This shows the effectiveness of the Sentiment Graph Walking algorithm, since the confidence scores estimated in the first stage are indispensable and indeed key to the learning in the second stage. 5.3.2 The Effect of Self-Learning Figure 4 shows the average Precision@N curve over the four domains for opinion target extraction. Ours-GN-Only is implemented by only removing the 50 initial negative examples identified by our GN corpora. We can see that the GN corpora work quite well, finding most of the top-ranked false opinion targets. At the same time, Ours-Full performs much better than Ours-GN-Only, which indicates that Ours-Full can filter out noise beyond the initial negative examples. Therefore, our self-learning strategy alleviates the false opinion target problem. Moreover, Table 5 shows that the performance of opinion word extraction is also improved based on the classified results of opinion targets.
Figure 4: The average Precision@N curve of the four domains on opinion target extraction.
ID | Pattern | Example | #Ext. | Conf. | PrO | PrT
#1 | <OC>{mod}<TC> | it has a clear screen | 7344 | 0.3938 | 0.59 | 0.66
#2 | <TC>{subj}<OC> | the sound quality is excellent | 2791 | 0.0689 | 0.62 | 0.70
#3 | <TC>{conj}<TC> | the size and weight make it convenient | 3620 | 0.0208 | N/A | 0.67
#4 | <TC>{subj}<TC> | the button layout is a simplistic plus | 1615 | 0.0096 | N/A | 0.67
#5 | <OC>{pnmod}<TC> | the buttons easier to use | 128 | 0.0014 | 0.61 | 0.34
#6 | <TC>{subj}(V){s}(VBE){subj}<OC> | software provided is simple | 189 | 0.0015 | 0.54 | 0.33
#7 | <OC>{mod}(Prep){pcomp-c}(V){obj}<TC> | great for playing audible books | 211 | 0.0013 | 0.43 | 0.48
Table 6: Examples of English patterns. #Ext. is the number of terms extracted, Conf. denotes the confidence score estimated by RWR, and PrO/PrT stand for the extraction precision of a pattern on opinion words/targets, respectively. Opinion words in the examples are in bold and opinion targets are in italic.
Figure 5 gives the recall of the long-tail opinion targets5 extracted, where Ours-Full is shown to perform much better than Ours-Stage1 and the three competitors. This observation shows that our method mitigates the long-tail opinion target problem.
5Since there is no explicit definition of the notion "long-tail", we conservatively regard the 60% of opinion targets with the lowest frequency as the "long-tail" terms.
Figure 5: The recall of long-tail opinion targets.
5.3.3 Analysis of Opinion Patterns Table 6 shows some example opinion patterns and their extraction accuracy on MP3 reviews in the first stage. Patterns #1 and #2 are the two highest-confidence opinion patterns of the "OC-TC" type, and Patterns #3 and #4 demonstrate two typical "TC-TC" patterns. As these patterns extract too many terms, their overall precision is very low. We therefore report their Precision@400, which is more meaningful because only top-ranked terms in the extracted results are regarded as opinion targets. Patterns #5 and #6 have high precision on opinion words but low precision on opinion targets. This observation demonstrates the false opinion target problem. Pattern #7 is an example of a pattern that extracts many false opinion relations, and it has low precision for both opinion words and opinion targets. We can see that Pattern #7 has a lower confidence than Patterns #5 and #6, although it extracts more words. This is because there is a low probability of walking from the opinion seeds to this pattern. This further shows that our method can reduce the confidence of low-quality patterns. 5.3.4 Sensitivity of Parameters Finally, we study the sensitivity of the parameters when recall is fixed at 0.70. Figure 6 shows the precision curves for different numbers N of initial training examples and filtering frequencies F. We can see that the performance saturates when N is set to 50 and does not vary much under different F, showing the robustness of our method. We thus set N to 50, and F to 3 for CRD, 5 for COAE08 and 10 for Large accordingly.
Figure 6: Influence of parameters.
6 Conclusion and Future Work This paper proposes a novel two-stage framework for mining opinion words and opinion targets. In the first stage, we propose a Sentiment Graph Walking algorithm, which incorporates syntactic patterns in a Sentiment Graph to improve the extraction performance. In the second stage, we propose a self-learning method to refine the results of the first stage. The experimental results show that our method achieves superior performance over state-of-the-art unsupervised methods. We further notice that opinion words are not limited to adjectives but can also be other types of words, such as verbs or nouns.
Identifying all kinds of opinion words is a more challenging task. We plan to study this problem in our future work. Acknowledgement Thanks to Prof. Yulan He for her insightful advices. This work was supported by the National Natural Science Foundation of China (No. 61070106, No. 61272332 and No. 61202329), the National High Technology Development 863 Program of China (No. 2012AA011102), the National Basic Research Program of China (No. 2012CB316300), Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation and the Opening Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research (ICDD201201). References Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, and Christopher D. Manning. 2009. Discriminative reordering with chinese grammatical relations features. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation, SSST ’09, pages 51–59. Sajib Dasgupta and Vincent Ng. 2009. Mine the easy, classify the hard: a semi-supervised approach to automatic sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2, ACL ’09, pages 701–709. Kok Wee Gan and Ping Wai Wong. 2000. Annotating information structures in chinese texts using hownet. In Proceedings of the second workshop on Chinese language processing: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics - Volume 12, CLPW ’00, pages 85–92, Stroudsburg, PA, USA. Association for Computational Linguistics. Ahmed Hassan and Dragomir Radev. 2010. Identifying text polarity using random walks. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 395– 403, Stroudsburg, PA, USA. Association for Computational Linguistics. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’04, pages 168–177, New York, NY, USA. ACM. Valentin Jijkoun, Maarten de Rijke, and Wouter Weerkamp. 2010. Generating focused topicspecific sentiment lexicons. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 585–594, Stroudsburg, PA, USA. Association for Computational Linguistics. Wei Jin and Hung Hay Ho. 2009. A novel lexicalized hmm-based learning framework for web opinion mining. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, pages 465–472. Thorsten Joachims. 1999. Transductive inference for text classification using support vector machines. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 200–209. Jason Kessler and Nicolas Nicolov. 2009. Targeting sentiment expressions through supervised ranking of linguistic configurations. In Proceedings of the Third International AAAI Conference on Weblogs and Social Media. Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. J. ACM, 46(5):604–632, September. Nozomi Kobayashi, Kentaro Inui, and Yuji Matsumoto. 2007. Extracting aspect-evaluation and aspectof relations in opinion mining. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 1065–1074, June. 
Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Ying-Ju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 653–661, Stroudsburg, PA, USA. Association for Computational Linguistics. Fangtao Li, Sinno Jialin Pan, Ou Jin, Qiang Yang, and Xiaoyan Zhu. 2012. Cross-domain co-extraction of sentiment and topic lexicons. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 410–419, July. 1772 Dekang Lin. 1998. Dependency-based evaluation of minipar. In Workshop on Evaluation of Parsing Systems at ICLRE. Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 1346–1356, Stroudsburg, PA, USA. Association for Computational Linguistics. George A. Miller. 1995. Wordnet: a lexical database for english. Commun. ACM, 38(11):39–41. Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 339–346. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In Proceedings of the 21st international jont conference on Artifical intelligence, IJCAI’09, pages 1199–1204. Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In Proceedings of the 2003 conference on Empirical methods in natural language processing, EMNLP ’03, pages 105–112, Stroudsburg, PA, USA. Association for Computational Linguistics. Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect keyword supervision. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’11, pages 618– 626, New York, NY, USA. ACM. Janyce Wiebe and Ellen Riloff. 2005. Creating subjective and objective sentence classifiers from unannotated texts. In Proceedings of the 6th international conference on Computational Linguistics and Intelligent Text Processing, CICLing’05, pages 486–497. Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3, pages 1533–1541. Liheng Xu, Kang Liu, Siwei Lai, Yubo Chen, and Jun Zhao. 2013. Walk and learn: A two-stage approach for opinion words and opinion targets co-extraction. In Proceedings of the 22nd International World Wide Web Conference, WWW ’13. Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O’Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1462–1470. Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, and Muhua Zhu. 2009. Multi-aspect opinion polling from textual reviews. In Proceedings of the 18th ACM conference on Information and knowledge management, CIKM ’09, pages 1799–1802. Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM international conference on Information and knowledge management, CIKM ’06, pages 43–50. 1773
2013
173
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1774–1784, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning Song Feng Jun Seok Kang Polina Kuznetsova Yejin Choi Department of Computer Science Stony Brook University Stony Brook, NY 11794-4400 songfeng, junkang, pkuznetsova, [email protected] Abstract Understanding the connotation of words plays an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text, as seemingly objective statements often allude nuanced sentiment of the writer, and even purposefully conjure emotion from the readers’ minds. The focus of this paper is drawing nuanced, connotative sentiments from even those words that are objective on the surface, such as “intelligence”, “human”, and “cheesecake”. We propose induction algorithms encoding a diverse set of linguistic insights (semantic prosody, distributional similarity, semantic parallelism of coordination) and prior knowledge drawn from lexical resources, resulting in the first broad-coverage connotation lexicon. 1 Introduction There has been a substantial body of research in sentiment analysis over the last decade (Pang and Lee, 2008), where a considerable amount of work has focused on recognizing sentiment that is generally explicit and pronounced rather than implied and subdued. However in many real-world texts, even seemingly objective statements can be opinion-laden in that they often allude nuanced sentiment of the writer (Greene and Resnik, 2009), or purposefully conjure emotion from the readers’ minds (Mohammad and Turney, 2010). Although some researchers have explored formal and statistical treatments of those implicit and implied sentiments (e.g. Wiebe et al. (2005), Esuli and Sebastiani (2006), Greene and Resnik (2009), Davidov et al. (2010)), automatic analysis of them largely remains as a big challenge. In this paper, we concentrate on understanding the connotative sentiments of words, as they play an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text. For instance, consider the following: Geothermal replaces oil-heating; it helps reducing greenhouse emissions.1 Although this sentence could be considered as a factual statement from the general standpoint, the subtle effect of this sentence may not be entirely objective: this sentence is likely to have an influence on readers’ minds in regard to their opinion toward “geothermal”. In order to sense the subtle overtone of sentiments, one needs to know that the word “emissions” has generally negative connotation, which geothermal reduces. In fact, depending on the pragmatic contexts, it could be precisely the intention of the author to transfer his opinion into the readers’ minds. The main contribution of this paper is a broadcoverage connotation lexicon that determines the connotative polarity of even those words with ever so subtle connotation beneath their surface meaning, such as “Literature”, “Mediterranean”, and “wine”. Although there has been a number of previous work that constructed sentiment lexicons (e.g., Esuli and Sebastiani (2006), Wilson et al. (2005a), Kaji and Kitsuregawa (2007), Qiu et al. 
(2009)), which seem to be increasingly and inevitably expanding over words with (strongly) connotative sentiments rather than explicit sentiments alone (e.g., "gun"), little prior work has directly tackled this problem of learning connotation,2 and much of the subtle connotation of many seemingly objective words is yet to be determined.
1Our learned lexicon correctly assigns negative polarity to emission.
2A notable exception would be the work of Feng et al. (2011), but with practical limitations. See §3 for detailed discussion.
POSITIVE: FEMA, Mandela, Intel, Google, Python, Sony, Pulitzer, Harvard, Duke, Einstein, Shakespeare, Elizabeth, Clooney, Hoover, Goldman, Swarovski, Hawaii, Yellowstone
NEGATIVE: Katrina, Monsanto, Halliburton, Enron, Teflon, Hiroshima, Holocaust, Afghanistan, Mugabe, Hutu, Saddam, Osama, Qaeda, Kosovo, Helicobacter, HIV
Table 1: Example Named Entities (Proper Nouns) with Polar Connotation.
A central premise to our approach is that it is the collocational statistics of words that affect and shape the polarity of connotation. Indeed, the etymology of "connotation" is from the Latin "com" ("together or with") and "notare" ("to mark"). It is important to clarify, however, that we do not simply assume that words that collocate share the same polarity of connotation. Although such an assumption played a key role in previous work on the analogous task of learning a sentiment lexicon (Velikovich et al., 2010), we expect the same assumption to be less reliable for drawing the subtle connotative sentiments of words. As one example, the predicate "cure", which has a positive connotation, typically takes arguments with negative connotation, e.g., "disease", when used in the "relieve" sense.3 Therefore, in order to attain a broad-coverage lexicon while maintaining good precision, we guide the induction algorithm with multiple, carefully selected linguistic insights: [1] distributional similarity, [2] semantic parallelism of coordination, [3] selectional preference, and [4] semantic prosody (e.g., Sinclair (1991), Louw (1993), Stubbs (1995), Stefanowitsch and Gries (2003)), and also exploit existing lexical resources as an additional inductive bias.
3Note that when "cure" is used in the "preserve" sense, it expects objects with non-negative connotation. Hence word-sense disambiguation (WSD) presents a challenge, though not unexpectedly. In this work, we assume the general connotation of each word over statistically prevailing senses, leaving a more cautious handling of WSD as future work.
We cast the connotation lexicon induction task as a collective inference problem, and consider approaches based on three distinct types of algorithmic framework that have been shown successful for conventional sentiment lexicon induction: random walk based on HITS/PageRank (e.g., Kleinberg (1999), Page et al. (1999), Feng et al. (2011), Heerschop et al. (2011), Montejo-Ráez et al. (2012)); label/graph propagation (e.g., Zhu and Ghahramani (2002), Velikovich et al. (2010)); and constraint optimization (e.g., Roth and Yih (2004), Choi and Cardie (2009), Lu et al. (2011)). We provide comparative empirical results over several variants of these approaches with comprehensive evaluations, including lexicon-based evaluation, human judgments, and extrinsic evaluations.
It is worthwhile to note that not all words have connotative meanings that are distinct from denotational meanings, and in some cases it can be difficult to determine whether the overall sentiment is drawn from denotational or connotative meanings exclusively, or both. Therefore, we encompass any sentiment from either type of meaning in the lexicon, where non-neutral polarity prevails over neutral if some meanings lead to neutral while others lead to non-neutral.4 Our work results in the first broad-coverage connotation lexicon,5 significantly improving both the coverage and the precision of Feng et al. (2011). As an interesting by-product, our algorithm can also be used as a proxy to measure the general connotation of real-world named entities based on their collocational statistics. Table 1 highlights some example proper nouns included in the final lexicon. The rest of the paper is structured as follows. In §2 we describe three types of induction algorithms, followed by evaluation in §3. Then we revisit the induction algorithms based on constraint optimization in §4 to enhance quality and scalability. §5 presents a comprehensive evaluation with human judges and extrinsic evaluations. Related work and the conclusion are in §6 and §7.
4In general, polysemous words do not seem to have conflicting non-neutral polarities over different senses, though there are many exceptions, e.g., "heat" or "fine". We treat each word in each part-of-speech as a separate word to reduce such cases, and otherwise aim to learn the most prevalent polarity in the corpus with respect to each part-of-speech of each word.
5Available at http://www.cs.stonybrook.edu/~ychoi/connotation.
Figure 1: Graph for Graph Propagation (§2.2).
Figure 2: Graph for ILP/LP (§2.3, §4.2).
2 Connotation Induction Algorithms We develop induction algorithms based on three distinct types of algorithmic framework that have been shown successful for the analogous task of sentiment lexicon induction: HITS & PageRank (§2.1), Label/Graph Propagation (§2.2), and Constraint Optimization via Integer Linear Programming (§2.3). As will be shown, each of these approaches incorporates additional, more diverse linguistic insights. 2.1 HITS & PageRank The work of Feng et al. (2011) explored the use of HITS (Kleinberg, 1999) and PageRank (Page et al., 1999) to induce the general connotation of words, hinging on the linguistic phenomena of selectional preference and semantic prosody, i.e., connotative predicates influencing the connotation of their arguments. For example, the object of a negative connotative predicate such as "cure" is likely to have negative connotation, e.g., "disease" or "cancer". The bipartite graph structure for this approach corresponds to the left-most box (labeled "pred-arg") in Figure 1. 2.2 Label Propagation With the goal of obtaining a broad-coverage lexicon in mind, we find that relying only on the structure of semantic prosody is limiting, due to the relatively small set of connotative predicates available.6 Therefore, we extend the graph structure as an overlay of two sub-graphs (Figure 1), as described below:
6For connotative predicates, we use the seed predicate set of Feng et al. (2011), which comprises 20 positive and 20 negative predicates.
Sub-graph #1: Predicate–Argument Graph This sub-graph is the bipartite graph that encodes the selectional preference of connotative predicates over their arguments. In this graph, connotative predicates p reside on one side and their co-occurring arguments a reside on the other side, based on the Google Web 1T corpus.7 The weights on the edges between predicates p and arguments a are defined using Point-wise Mutual Information (PMI) as follows:
$$w(p \rightarrow a) := \mathrm{PMI}(p, a) = \log_2 \frac{P(p, a)}{P(p)\,P(a)}$$
PMI scores have been widely used in previous studies to measure association between words (e.g., Turney (2001), Church and Hanks (1990)).
7We restrict predicate-argument pairs to verb-object pairs in this study. Note that the Google Web 1T dataset consists of n-grams up to n = 5. Since n-gram sequences are too short to apply a parser, we extract verb-object pairs approximately by matching part-of-speech tags. Empirically, when overlaid with the second sub-graph, we found that it is better to keep the connectivity of this sub-graph uni-directional; that is, we only allow edges to go from a predicate to an argument.
Sub-graph #2: Argument–Argument Graph The second sub-graph is based on the distributional similarities among the arguments. One possible way of constructing such a graph is to simply connect all nodes and assign edge weights proportional to word association scores, such as PMI, or distributional similarity. However, such a completely connected graph can be susceptible to propagating noise, and does not scale well to a very large vocabulary. We therefore reduce the graph connectivity by exploiting the semantic parallelism of coordination (Bock (1986), Hatzivassiloglou and McKeown (1997), Pickering and Branigan (1998)). In particular, we consider an undirected edge between a pair of arguments a1 and a2 only if they occurred together in the "a1 and a2" or "a2 and a1" coordination, and assign edge weights as:
$$w(a_1 - a_2) = \mathrm{CosineSim}(\vec{a}_1, \vec{a}_2) = \frac{\vec{a}_1 \cdot \vec{a}_2}{\|\vec{a}_1\|\,\|\vec{a}_2\|}$$
where $\vec{a}_1$ and $\vec{a}_2$ are the co-occurrence vectors for a1 and a2 respectively. The co-occurrence vector for each word is computed using PMI scores with respect to the top n co-occurring words.8 n (=50) is selected empirically. The edge weights in the two sub-graphs are normalized so that they lie in a comparable range.9
POS | POSITIVE | NEGATIVE | NEUTRAL
n. | avatar, adrenaline, keynote, debut, stakeholder, sunshine, cooperation | unbeliever, delay, shortfall, gunshot, misdemeanor, mutiny, rigor | header, mark, clothing, outline, grid, gasoline, course, preview
v. | handcraft, volunteer, party, accredit, personalize, nurse, google | sentence, cough, trap, scratch, debunk, rip, misspell, overcharge | state, edit, send, put, arrive, type, drill, name, stay, echo, register
a. | floral, vegetarian, prepared, ageless, funded, contemporary | debilitating, impaired, swollen, intentional, jarring, unearned | same, cerebral, west, uncut, automatic, hydrated, unheated, routine
Table 2: Example Words with Learned Connotation: Nouns (n), Verbs (v), Adjectives (a).
Limitations of Graph-based Algorithms Although graph-based algorithms (§2.1, §2.2) provide an intuitive framework for incorporating various lexical relations, their limitations include: 1. They allow only non-negative edge weights.
Therefore, we can encode only positive (supportive) relations among words (e.g., distributionally similar words will endorse each other with the same polarity), while failing to exploit negative relations (e.g., antonyms may drive each other toward the opposite polarity). 2. They induce positive and negative polarities in isolation via separate graphs. However, we expect that a more effective algorithm should induce both polarities simultaneously. 3. The framework does not readily allow incorporating a diverse set of soft and hard constraints.
8We discard edges with cosine similarity ≤ 0, as those indicate either independence or the opposite of similarity.
9Note that cosine similarity does not make sense for the first sub-graph, as there is no reason why a predicate and an argument should be distributionally similar. We experimented with many different variations on the graph structure and edge weights, including ones that include any word pairs that occurred together frequently enough. For brevity, we present the version that achieved the best results here.
2.3 Constraint Optimization Addressing the limitations of graph-based algorithms (§2.2), we propose an induction algorithm based on Integer Linear Programming (ILP). Figure 2 provides a pictorial overview. In comparison to Figure 1, the two new components are: (1) dictionary-driven relations targeting enhanced precision, and (2) dictionary-driven words (i.e., words unseen with respect to the relations explored in Figure 1) targeting enhanced coverage. We formulate the insights in Figure 2 using ILP as follows: Definition of sets of words:
1. P+: the set of positive seed predicates. P−: the set of negative seed predicates.
2. S: the set of seed sentiment words.
3. R_syn: word pairs in the synonym relation. R_ant: word pairs in the antonym relation. R_coord: word pairs in the coordination relation. R_pred: word pairs in the pred-arg relation. R_pred+(−): R_pred based on P+ (P−).
Definition of variables: For each word i, we define binary variables x_i, y_i, z_i ∈ {0, 1}, where x_i = 1 (y_i = 1, z_i = 1) if and only if i has a positive (negative, neutral) connotation, respectively. For every pair of words i and j, we define binary variables $d^{pq}_{i,j}$ where p, q ∈ {+, −, 0} and $d^{pq}_{i,j} = 1$ if and only if the polarities of i and j are p and q respectively. Objective function: We aim to maximize:
$$F = \Phi_{prosody} + \Phi_{coord} + \Phi_{neu}$$
where Φ_prosody is the score based on semantic prosody, Φ_coord captures the distributional similarity over coordination, and Φ_neu controls the sensitivity of connotation detection between positive (negative) and neutral. In particular,
$$\Phi_{prosody} = \sum_{(i,j) \in R_{pred}} w^{pred}_{i,j}\,\big(d^{++}_{i,j} + d^{--}_{i,j} - d^{+-}_{i,j} - d^{-+}_{i,j}\big)$$
$$\Phi_{coord} = \sum_{(i,j) \in R_{coord}} w^{coord}_{i,j}\,\big(d^{++}_{i,j} + d^{--}_{i,j} + d^{00}_{i,j}\big)$$
Variable consistency between dpq ij and xi, yi, zi: xi + xj −1 ≤2d++ i,j ≤ xi + xj yi + yj −1 ≤2d−− i,j ≤ yi + yj zi + zj −1 ≤2d00 i,j ≤ zi + zj xi + yj −1 ≤2d+− i,j ≤ xi + yj yi + xj −1 ≤2d−+ i,j ≤ yi + xj Hard constrains for WordNet relations: 1. Cant: Antonym pairs will not have the same positive or negative polarity: ∀(i, j) ∈Rant, xi + xj ≤1, yi + yj ≤1 For this constraint, we only consider antonym pairs that share the same root, e.g., “sufficient” and “insufficient”, as those pairs are more likely to have the opposite polarities than pairs without sharing the same root, e.g., “east” and “west”. 2. Csyn: Synonym pairs will not have the opposite polarity: ∀(i, j) ∈Rsyn, xi + yj ≤1, xj + yi ≤1 3 Experimental Result I We provide comprehensive comparisons over variants of three types of algorithms proposed in §2. We use the Google Web 1T data (Brants and Franz (2006)), and POS-tagged ngrams using Stanford POS Tagger (Toutanova and Manning (2000)). We filter out the ngrams with punctuations and other special characters to reduce the noise. 3.1 Comparison against Conventional Sentiment Lexicon Note that we consider the connotation lexicon to be inclusive of a sentiment lexicon for two practical reasons: first, it is highly unlikely that any word with non-neutral sentiment (i.e., positive or negative) would carry connotation of the opposite, i.e., conflicting10 polarity. Second, for some words with distinct sentiment or strong connotation, it can be difficult or even unnatural to draw a precise distinction between connotation and sentiment, e.g., “efficient”. Therefore, sentiment lexicons can serve as a surrogate to measure a subset of connotation words induced by the algorithms, as shown in Table 3 with respect to General Inquirer (Stone and Hunt (1963)) and MPQA (Wilson et al. (2005b)).11 Discussion Table 3 shows the agreement statistics with respect to two conventional sentiment lexicons. We find that the use of label propagation alone [PRED-ARG (CP)] improves the performance substantially over the comparable graph construction with different graph analysis algorithms, in particular, HITS and PageRank approaches of Feng et al. (2011). The two completely connected variants of the graph propagation on the Pred-Arg graph, [N PRED-ARG (PMI)] and [N PRED-ARG (CP)], do not necessarily improve the performance over the simpler and computationally lighter alternative, [PREDARG (CP)]. The [OVERLAY], which is based on both Pred-Arg and Arg-Arg subgraphs (§2.2), achieves the best performance among graph-based algorithms, significantly improving the precision over all other baselines. This result suggests: 1 The sub-graph #2, based on the semantic parallelism of coordination, is simple and yet very powerful as an inductive bias. 2 The performance of graph propagation varies significantly depending on the graph topology and the corresponding edge weights. Note that a direct comparison against ILP for top N words is tricky, as ILP does not rank results. Only for comparison purposes however, we assign 10We consider “positive” and “negative” polarities conflict, but “neutral” polarity does not conflict with any. 11In the case of General Inquirer, we use words in POSITIV and NEGATIV sets as words with positive and negative labels respectively. 
1778 GENINQ EVAL MPQA EVAL 100 1,000 5,000 10,000 ALL 100 1,000 5,000 10,000 ALL ILP 97.6 94.5 84.5 80.8 80.4 98.0 89.7 84.6 81.2 78.4 OVERLAY 97.0 95.1 78.8 (78.3) 78.3 98.0 93.4 82.1 77.7 77.7 N PRED-ARG (PMI) 91.0 91.4 76.1 (76.1) 76.1 88.0 89.1 78.8 75.1 75.1 NPRED-ARG (CP) 88.0 85.4 76.2 (76.2) 76.2 87.0 82.6 78.0 76.3 76.3 PRED-ARG (CP) 91.0 91.0 81.0 (81.0) 81.0 88.0 91.5 80.0 78.3 78.3 HITS-ASYMT 77.0 68.8 66.5 86.3 81.3 72.2 PAGERANK-ASYMF 77.0 68.5 65.7 87.2 80.3 72.3 Table 3: Evaluation of Induction Algorithms (§2) with respect to Sentiment Lexicons (precision%). ranks based on the frequency of words for ILP. Because of this issue, the performance of top ∼1k words of ILP should be considered only as a conservative measure. Importantly, when evaluated over more than top 5k words, ILP is overall the top performer considering both precision (shown in Table 3) and coverage (omitted for brevity).12 4 Precision, Coverage, and Efficiency In this section, we address three important aspects of an ideal induction algorithm: precision, coverage, and efficiency. For brevity, the remainder of the paper will focus on the algorithms based on constraint optimization, as it turned out to be the most effective one from the empirical results in §3. Precision In order to see the effectiveness of the induction algorithms more sharply, we had used a limited set of seed words in §3. However to build a lexicon with substantially enhanced precision, we will use as a large seed set as possible, e.g., entire sentiment lexicons13. Broad coverage Although statistics in Google 1T corpus represent a very large amount of text, words that appear in pred-arg and coordination relations are still limited. To substantially increase the coverage, we will leverage dictionary words (that are not in the corpus) as described in §2.3 and Figure 2. Efficiency One practical problem with ILP is efficiency and scalability. In particular, we found that it becomes nearly impractical to run the ILP formulation including all words in WordNet plus all words in the argument position in Google Web 1T. We therefore explore an alternative approach based on Linear Programming in what follows. 12In fact, the performance of PRED-ARG variants for top 10K w.r.t. GENINQ is not meaningful as no additional word was matched beyond top 5k words. 13Note that doing so will prevent us from evaluating against the same sentiment lexicon used as a seed set. 4.1 Induction using Linear Programming One straightforward option for Linear Programming formulation may seem like using the same Integer Linear Programming formulation introduced in §2.3, only changing the variable definitions to be real values ∈[0, 1] rather than integers. However, because the hard constraints in §2.3 are defined based on the assumption that all the variables are binary integers, those constraints are not as meaningful when considered for real numbers. Therefore we revise those hard constraints to encode various semantic relations (WordNet and semantic coordination) more directly. Definition of variables: For each word i, we define variables xi, yi, zi ∈[0, 1]. i has a positive (negative) connotation if and only if the xi (yi) is assigned the greatest value among the three variables; otherwise, i is neutral. 
Objective function: We aim to maximize: F = Φprosody + Φcoord + Φsyn + Φant + Φneu Φprosody = Rpred+ X i,j wpred+ i,j · xj + Rpred− X i,j wpred− i,j · yj Φcoord = Rcoord X i,j wcoord i,j · (dc++ i,j + dc−− i,j ) Φsyn = W syn Rsyn X i,j (ds++ i,j + ds−− i,j ) Φant = W ant Rant X i,j (da++ i,j + da−− i,j ) Φneu = α Rpred X i,j wpred i,j · zj Hard constraints We add penalties to the objective function if the polarity of a pair of words is not consistent with its corresponding semantic relations. For example, for synonyms i and j, we introduce a penalty W syn (a positive constant) for ds++ i,j , ds−− i,j ∈[−1, 0], where we set the upper bound of ds++ i,j (ds−− i,j ) as the signed distance of 1779 FORMULA POSITIVE NEGATIVE ALL R P F R P F R P F ILP Φprosody + Csyn + Cant 51.4 85.7 64.3 44.7 87.9 59.3 48.0 86.8 61.8 Φprosody + Csyn + Cant + CS 61.2 93.3 73.9 52.4 92.2 66.8 56.8 92.8 70.5 Φprosody + Φcoord + Csyn + Cant 67.3 75.0 70.9 53.7 84.4 65.6 60.5 79.7 68.8 Φprosody + Φcoord + Csyn + Cant + CS 62.2 96.0 75.5 51.5 89.5 65.4 56.9 92.8 70.5 LP Φprosody + Φsyn + Φant 24.4 76.0 36.9 23.6 78.8 36.3 24.0 77.4 36.6 Φprosody + Φsyn + Φant + ΦS 71.6 87.8 78.9 68.8 84.6 75.9 70.2 86.2 77.4 Φprosody + Φcoord + Φsyn + Φant 67.9 92.6 78.3 64.6 89.1 74.9 66.3 90.8 76.6 Φprosody + Φcoord + Φsyn + Φant + ΦS 78.6 90.5 84.1 73.3 87.1 79.6 75.9 88.8 81.8 Table 4: ILP/LP Comparison on MQPA′ (%). xi and xj (yi and yj) as shown below: For (i, j) ∈Rsyn, ds++ i,j ≤xi −xj, ds++ i,j ≤xj −xi ds−− i,j ≤yi −yj, ds−− i,j ≤yj −yi Notice that ds++ i,j , ds−− i,j satisfying above inequalities will be always of negative values, hence in order to maximize the objective function, the LP solver will try to minimize the absolute values of ds++ i,j , ds−− i,j , effectively pushing i and j toward the same polarity. Constraints for semantic coordination Rcoord can be defined similarly. Lastly, following constraints encode antonym relations: For (i, j) ∈Rant , da++ i,j ≤xi −(1 −xj), da++ i,j ≤(1 −xj) −xi da−− i,j ≤yi −(1 −yj), da−− i,j ≤(1 −yj) −yi Interpretation Unlike ILP, some of the variables result in fractional values. We consider a word has positive or negative polarity only if the assignment indicates 1 for the corresponding polarity and 0 for the rest. In other words, we treat all words with fractional assignments over different polarities as neutral. Because the optimal solutions of LP correspond to extreme points in the convex polytope formed by the constraints, we obtain a large portion of words with non-fractional assignments toward non-neutral polarities. Alternatively, one can round up fractional values. 4.2 Empirical Comparisons: ILP v.s. LP To solve the ILP/LP, we run ILOG CPLEX Optimizer (CPLEX, 2009)) on a 3.5GHz 6 core CPU machine with 96GB RAM. Efficiency-wise, LP runs within 10 minutes while ILP takes several hours. Table 4 shows the results evaluated against MPQA for different variations of ILP and LP. We find that LP variants much better recall and F-score, while maintaining comparable precision. Therefore, we choose the connotation lexicon by LP (C-LP) in the following evaluations in §5. 
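To make the contrast between the two formulations concrete, the sketch below expresses a toy instance of the induction problem with the PuLP modelling library rather than the CPLEX setup used in the paper. The word lists, relations, and weights are invented, the prosody term is reduced to the Rpred+/Rpred− form of §4.1 for both variants, and the ILP branch keeps only the synonym/antonym hard constraints of §2.3; it is an illustration of the encoding, not a reimplementation of the full system.

```python
# Toy sketch of the connotation-induction ILP and its LP relaxation with PuLP.
# Words, predicate-argument pairs, relations and weights below are invented stand-ins.
import pulp

words = ["win", "cure", "delay", "suffer", "achieve", "sufficient", "insufficient"]
pred_pos = {("enjoy", "win"): 0.6, ("enjoy", "cure"): 0.4}          # stand-in for R_pred+
pred_neg = {("suffer_from", "delay"): 0.7, ("suffer_from", "suffer"): 0.3}  # R_pred-
synonyms = [("achieve", "win")]                                     # R_syn
antonyms = [("sufficient", "insufficient")]                         # R_ant (same-root pair)
alpha = 0.1                                                         # sensitivity toward neutral

def induce(relax=False):
    cat = "Continuous" if relax else "Binary"            # LP relaxation vs ILP
    prob = pulp.LpProblem("connotation", pulp.LpMaximize)
    x = {w: pulp.LpVariable("x_" + w, 0, 1, cat) for w in words}    # positive
    y = {w: pulp.LpVariable("y_" + w, 0, 1, cat) for w in words}    # negative
    z = {w: pulp.LpVariable("z_" + w, 0, 1, cat) for w in words}    # neutral
    for w in words:                                      # each word gets exactly one polarity
        prob += x[w] + y[w] + z[w] == 1
    # Semantic prosody: arguments of positive (negative) seed predicates are pulled
    # toward positive (negative) polarity; alpha rewards the neutral option.
    objective = (pulp.lpSum(wt * x[a] for (_, a), wt in pred_pos.items())
                 + pulp.lpSum(wt * y[a] for (_, a), wt in pred_neg.items())
                 + alpha * pulp.lpSum(wt * z[a] for (_, a), wt in
                                      list(pred_pos.items()) + list(pred_neg.items())))
    if relax:
        # LP variant: synonym/antonym relations become bounded penalty variables in [-1, 0].
        penalties = []
        for i, j in synonyms:
            d = pulp.LpVariable("ds_%s_%s" % (i, j), -1, 0)
            prob += d <= x[i] - x[j]
            prob += d <= x[j] - x[i]
            penalties.append(d)
        for i, j in antonyms:
            d = pulp.LpVariable("da_%s_%s" % (i, j), -1, 0)
            prob += d <= x[i] - (1 - x[j])
            prob += d <= (1 - x[j]) - x[i]
            penalties.append(d)
        objective = objective + pulp.lpSum(penalties)    # W_syn = W_ant = 1 in this toy
    else:
        # ILP variant: the same relations are hard constraints.
        for i, j in synonyms:
            prob += x[i] + y[j] <= 1                     # synonyms: never opposite polarity
            prob += x[j] + y[i] <= 1
        for i, j in antonyms:
            prob += x[i] + x[j] <= 1                     # antonyms: never both positive
            prob += y[i] + y[j] <= 1                     # ... and never both negative
    prob += objective
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    def label(w):                                        # fractional LP values count as neutral
        if x[w].value() > 0.99:
            return "+"
        if y[w].value() > 0.99:
            return "-"
        return "0"
    return {w: label(w) for w in words}

print("ILP:", induce(relax=False))
print("LP :", induce(relax=True))
```

In this toy encoding the LP branch differs only in relaxing the variables and trading the hard relation constraints for bounded penalty terms, which mirrors the efficiency argument above: the relaxation keeps the same inductive pressure while remaining solvable at a much larger scale.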
5 Experimental Results II In this section, we present comprehensive intrinsic §5.1 and extrinsic §5.2 evaluations comparing three representative lexicons from §2 & §4: CLP, OVERLAY, PRED-ARG (CP), and two popular sentiment lexicons: SentiWordNet (Baccianella et al., 2010) and GI+MPQA.14 Note that C-LP is the largest among all connotation lexicons, including ∼70,000 polar words.15 5.1 Intrinsic Evaluation: Human Judgements We evaluate 4000 words16 using Amazon Mechanical Turk (AMT). Because we expect that judging a connotation can be dependent on one’s cultural background, personality and value systems, we gather judgements from 5 people for each word, from which we hope to draw a more general judgement of connotative polarity. About 300 unique Turkers participated the evaluation tasks. We gather gold standard only for those words for which more than half of the judges agreed on the same polarity. Otherwise we treat them as ambiguous cases.17 Figure 3 shows a part of the AMT task, where Turkers are presented with questions that help judges to determine the subtle connotative polarity of each word, then asked to rate the degree of connotation on a scale from 5 (most negative) and 5 (most positive). To draw 14GI+MPQA is the union of General Inquirer and MPQA. The GI, we use words in the “Positiv” & “Negativ” set. For SentiWordNet, to retrieve the polarity of a given word, we sum over the polarity scores over all senses, where positive (negative) values correspond to positive (negative) polarity. 15∼13k adj, ∼6k verbs, ∼28k nouns, ∼22k proper nouns. 16We choose words that are not already in GI+MPQA and obtain most frequent 10,000 words based on the unigram frequency in Google-Ngram, then randomly select 4000 words. 17We allow Turkers to mark words that can be used with both positive and negative connotation, which results in about 7% of words that are excluded from the gold standard set. 1780 Figure 3: A Part of AMT Task Design. YES NO QUESTION % Avg % Avg “Enjoyable or pleasant” 43.3 2.9 16.3 -2.4 “Of a good quality” 56.7 2.5 6.1 -2.7 “Respectable / honourable” 21.0 3.3 14.0 -1.1 “Would like to do or have” 52.5 2.8 11.5 -2.4 Table 5: Distribution of Answers from AMT. the gold standard, we consider two different voting schemes: • ΩV ote: The judgement of each Turker is mapped to neutral for −1 ≤score ≤1, positive for score ≥2, negative for score ≤2, then we take the majority vote. • ΩScore: Let σ(i) be the sum (weighted vote) of the scores given by 5 judges for word i. Then we determine the polarity label l(i) of i as: l(i) =    positive if σ(i) > 1 negative if σ(i) < −1 neutral if −1 ≤σ(i) ≤1 The resulting distribution of judgements is shown in Table 5 & 6. Interestingly, we observe that among the relatively frequently used English words, there are overwhelmingly more positively connotative words than negative ones. In Table 7, we show the percentage of words with the same label over the mutual words by the two lexicon. The highest agreement is 77% by C-LP and the gold standard by AMTV ote. How good is this? It depends on what is the natural degree of agreement over subtle connotation among people. Therefore, we also report the degree of agreement among human judges in Table 7, where we compute the agreement of one Turker with respect to the gold standard drawn from the rest of the Turkers, and take the average across over all five Turkers18. 
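For reference, the two aggregation schemes ΩVote and ΩScore defined earlier in this section can be made concrete as in the toy sketch below; the example scores are invented, and we read the ΩVote negative case as score ≤ −2, the symmetric counterpart of the positive threshold.

```python
# Toy sketch of the Omega_Vote and Omega_Score aggregation of the five AMT
# judgements per word (scores in [-5, 5]); the example judgements are invented.
from collections import Counter

def omega_vote(scores):
    """Map each judge's score to a label, then take the majority vote."""
    def to_label(s):
        if -1 <= s <= 1:
            return "neutral"
        # Positive for s >= 2; we read the negative case as s <= -2.
        return "positive" if s >= 2 else "negative"
    labels = Counter(to_label(s) for s in scores)
    label, count = labels.most_common(1)[0]
    # More than half of the judges must agree; otherwise the word is undetermined.
    return label if count > len(scores) / 2 else "undetermined"

def omega_score(scores):
    """Sum the five scores (weighted vote) and threshold the total sigma."""
    sigma = sum(scores)
    if sigma > 1:
        return "positive"
    if sigma < -1:
        return "negative"
    return "neutral"

judgements = {"cooperation": [3, 2, 4, 1, 2],
              "delay": [-3, -2, 0, -4, -1],
              "grid": [0, 1, -1, 0, 2]}
for word, scores in judgements.items():
    print(word, omega_vote(scores), omega_score(scores))
```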
Interestingly, the performance of 18In order to draw the gold standard from the 4 remaining Turkers, we consider adjusted versions of ΩV ote and ΩScore schemes described above. POS NEG NEU UNDETERMINED ΩV ote 50.4 14.6 24.1 10.9 ΩScore 67.9 20.6 11.5 n/a Table 6: Distribution of Connotative Polarity from AMT. C-LP SENTIWN HUMAN JUDGES ΩV ote 77.0 71.5 66.0 ΩScore 73.0 69.0 69.0 Table 7: Agreement (Accuracy) against AMTdriven Gold Standard. Turkers is not as good as that of C-LP lexicon. We conjecture that this could be due to generally varying perception of different people on the connotative polarity,19 while the corpus-driven induction algorithms focus on the general connotative polarity corresponding to the most prevalent senses of words in the corpus. 5.2 Extrinsic Evaluation We conduct lexicon-based binary sentiment classification on the following two corpora. SemEval From the SemEval task, we obtain a set of news headlines with annotated scores (ranging from -100 to 87). The positive/negative scores indicate the degree of positive/negative polarity orientation. We construct several sets of the positive and negative texts by setting thresholds on the scores as shown in Table 8. “≶n” indicates that the positive set consists of the texts with scores ≥n and the negative set consists of the texts with scores ≤−n. Emoticon tweets The sentiment Twitter data20 consists of tweets containing either a smiley emoticon (positive sentiment) or a frowny emoticon (negative sentiment). We filter out the tweets with question marks or more than 30 words, and keep the ones with at least two words in the union of all polar words in the five lexicons in Table 8, and then randomly select 10000 per class. We denote the short text (e.g., content of tweets or headline texts from SemEval) by t. w represents the word in t. W +/W −is the set of posi19Pearson correlation coefficient among turkers is 0.28, which corresponds to a positive small to medium correlation. Note that when the annotation of turkers is aggregated, we observe agreement as high as 77% with respect to the learned connotation lexicon. 20http://www.stanford.edu/˜alecmgo/ cs224n/twitterdata.2009.05.25.c.zip 1781 DATA LEXICON TWEET SEMEVAL ≶20 ≶40 ≶60 ≶80 C-LP 70.1 70.8 74.6 80.8 93.5 OVERLAY 68.5 70.0 72.9 76.8 89.6 PRED-ARG (CP) 60.5 64.2 69.3 70.3 79.2 SENTIWN 67.4 61.0 64.5 70.5 79.0 GI+MPQA 65.0 64.5 69.0 74.0 80.5 Table 8: Accuracy on Sentiment Classification (%). tive/negative words of the lexicon. We define the weight of w as s(w). If w is adjective, s(w) = 2; otherwise s(w) = 1. Then the polarity of each text is determined as follows: pol(t) =          positive if W + P w∈t s(w) ≥ W − P w∈t s(w) negative if W + P w∈t s(w) < W − P w∈t s(w) As shown in Table 8, C-LP generally performs better than the other lexicons on both corpora. Considering that only very simple classification strategy is applied, the result by the connotation lexicon is quite promising. Finally, Table 1 highlights interesting examples of proper nouns with connotative polarity, e.g., “Mandela”, “Google”, “Hawaii” with positive connotation, and “Monsanto”, “Halliburton”, “Enron” with negative connotation, suggesting that our algorithms could potentially serve as a proxy to track the general connotation of real world entities. Table 2 shows example common nouns with connotative polarity. 
5.3 Practical Remarks on WSD and MWEs In this work we aim to find the polarity of most prevalent senses of each word, in part because it is not easy to perform unsupervised word sense disambiguation (WSD) on a large corpus in a reliable way, especially when the corpus consists primarily of short n-grams. Although the resulting lexicon loses on some of the polysemous words with potentially opposite polarities, per-word connotation (rather than per-sense connotation) does have a practical value: it provides a convenient option for users who wish to avoid the burden of WSD before utilizing the lexicon. Future work includes handling of WSD and multi-word expressions (MWEs), e.g., “Great Leader” (for Kim Jong-Il), “Inglourious Basterds” (a movie title).21 21These examples credit to an anonymous reviewer. 6 Related Work A very interesting work of Mohammad and Turney (2010) uses Mechanical Turk in order to build the lexicon of emotions evoked by words. In contrast, we present an automatic approach that infers the general connotation of words. Velikovich et al. (2010) use graph propagation algorithms for constructing a web-scale polarity lexicon for sentiment analysis. Although we employ the same graph propagation algorithm, our graph construction is fundamentally different in that we integrate stronger inductive biases into the graph topology and the corresponding edge weights. As shown in our experimental results, we find that judicious construction of graph structure, exploiting multiple complementing linguistic phenomena can enhance both the performance and the efficiency of the algorithm substantially. Other interesting approaches include one based on min-cut (Dong et al., 2012) or LDA (Xie and Li, 2012). Our proposed approaches are more suitable for encoding a much diverse set of linguistic phenomena however. But our work use a few seed predicates with selectional preference instead of relying on word similarity. Some recent work explored the use of constraint optimization framework for inducing domain-dependent sentiment lexicon (Choi and Cardie (2009), Lu et al. (2011)). Our work differs in that we provide comprehensive insights into different formulations of ILP and LP, aiming to learn the much different task of learning the general connotation of words. 7 Conclusion We presented a broad-coverage connotation lexicon that determines the subtle nuanced sentiment of even those words that are objective on the surface, including the general connotation of realworld named entities. Via a comprehensive evaluation, we provided empirical insights into three different types of induction algorithms, and proposed one with good precision, coverage, and efficiency. Acknowledgments This research was supported in part by the Stony Brook University Office of the Vice President for Research. We thank reviewers for many insightful comments and suggestions, and for providing us with several very inspiring examples to work with. 1782 References Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10), Valletta, Malta, may. European Language Resources Association (ELRA). J. Kathryn Bock. 1986. Syntactic persistence in language production. Cognitive psychology, 18(3):355–387. Thorsten Brants and Alex Franz. 2006. {Web 1T 5gram Version 1}. Yejin Choi and Claire Cardie. 2009. 
Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 Volume 2, EMNLP ’09, pages 590–598, Stroudsburg, PA, USA. Association for Computational Linguistics. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Comput. Linguist., 16:22–29, March. ILOG CPLEX. 2009. High-performance software for mathematical programming and optimization. U RL http://www.ilog.com/products/cplex. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL ’10, pages 107–116, Stroudsburg, PA, USA. Association for Computational Linguistics. Xishuang Dong, Qibo Zou, and Yi Guan. 2012. Setsimilarity joins based semi-supervised sentiment analysis. In Neural Information Processing, pages 176–183. Springer. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC06), pages 417–422. Song Feng, Ritwik Bose, and Yejin Choi. 2011. Learning general connotation of words using graph-based algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1092–1103. Association for Computational Linguistics. Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 503–511, Boulder, Colorado, June. Association for Computational Linguistics. Vasileios Hatzivassiloglou and Kathleen R McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics, pages 174–181. Association for Computational Linguistics. Bas Heerschop, Alexander Hogenboom, and Flavius Frasincar. 2011. Sentiment lexicon creation from lexical resources. In Business Information Systems, pages 185–196. Springer. Nobuhiro Kaji and Masaru Kitsuregawa. 2007. Building lexicon for sentiment analysis from massive collection of html documents. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. JOURNAL OF THE ACM, 46(5):604–632. Bill Louw. 1993. Irony in the text or insincerity in the writer. Text and technology: In honour of John Sinclair, pages 157–176. Yue Lu, Malu Castellanos, Umeshwar Dayal, and ChengXiang Zhai. 2011. Automatic construction of a context-aware sentiment lexicon: an optimization approach. In Proceedings of the 20th international conference on World wide web, pages 347– 356. ACM. Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26–34, Los Angeles, CA, June. Association for Computational Linguistics. Arturo Montejo-R´aez, Eugenio Mart´ınez-C´amara, M. Teresa Mart´ın-Valdivia, and L. Alfonso Ure˜na L´opez. 2012. 
Random walk weighting over sentiwordnet for sentiment polarity detection on twitter. In Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis, pages 3–10, Jeju, Korea, July. Association for Computational Linguistics. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical Report 1999-66, Stanford InfoLab, November. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Found. Trends Inf. Retr., 2(12):1–135. Martin J Pickering and Holly P Branigan. 1998. The representation of verbs: Evidence from syntactic priming in language production. Journal of Memory and Language, 39(4):633–651. 1783 Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In Proceedings of the 21st international jont conference on Artifical intelligence, IJCAI’09, pages 1199–1204, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. Defense Technical Information Center. John Sinclair. 1991. Corpus, concordance, collocation. Describing English language. Oxford University Press. Anatol Stefanowitsch and Stefan Th Gries. 2003. Collostructions: Investigating the interaction of words and constructions. International journal of corpus linguistics, 8(2):209–243. Philip J. Stone and Earl B. Hunt. 1963. A computer approach to content analysis: studies using the general inquirer system. In Proceedings of the May 2123, 1963, spring joint computer conference, AFIPS ’63 (Spring), pages 241–256, New York, NY, USA. ACM. Michael Stubbs. 1995. Collocations and semantic profiles: on the cause of the trouble with quantitative studies. Functions of language, 2(1):23–55. Kristina Toutanova and Christopher D. Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In In EMNLP/VLC 2000, pages 63–70. Peter Turney. 2001. Mining the web for synonyms: Pmi-ir versus lsa on toefl. Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald. 2010. The viability of web-derived polarity lexicons. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation (formerly Computers and the Humanities), 39(2/3):164–210. Theresa Wilson, Paul Hoffmann, Swapna Somasundaran, Jason Kessler, Janyce Wiebe, Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005a. Opinionfinder: a system for subjectivity analysis. In Proceedings of HLT/EMNLP on Interactive Demonstrations, pages 34–35, Morristown, NJ, USA. Association for Computational Linguistics. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005b. Recognizing contextual polarity in phraselevel sentiment analysis. In HLT ’05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 347–354, Morristown, NJ, USA. Association for Computational Linguistics. Rui Xie and Chunping Li. 2012. Lexicon construction: A topic model approach. In Systems and Informatics (ICSAI), 2012 International Conference on, pages 2299–2303. IEEE. Xiaojin Zhu and Zoubin Ghahramani. 2002. 
Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University. 1784
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 176–186, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Microblogs as Parallel Corpora Wang Ling123 Guang Xiang2 Chris Dyer2 Alan Black2 Isabel Trancoso 13 (1)L2F Spoken Systems Lab, INESC-ID, Lisbon, Portugal (2)Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA, USA (3)Instituto Superior T´ecnico, Lisbon, Portugal {lingwang,guangx,cdyer,awb}@cs.cmu.edu [email protected] Abstract In the ever-expanding sea of microblog data, there is a surprising amount of naturally occurring parallel text: some users create post multilingual messages targeting international audiences while others “retweet” translations. We present an efficient method for detecting these messages and extracting parallel segments from them. We have been able to extract over 1M Chinese-English parallel segments from Sina Weibo (the Chinese counterpart of Twitter) using only their public APIs. As a supplement to existing parallel training data, our automatically extracted parallel data yields substantial translation quality improvements in translating microblog text and modest improvements in translating edited news commentary. The resources in described in this paper are available at http://www.cs.cmu.edu/∼lingwang/utopia. 1 Introduction Microblogs such as Twitter and Facebook have gained tremendous popularity in the past 10 years. In addition to being an important form of communication for many people, they often contain extremely current, even breaking, information about world events. However, the writing style of microblogs tends to be quite colloquial, with frequent orthographic innovation (R U still with me or what?) and nonstandard abbreviations (idk! shm)—quite unlike the style found in more traditional, edited genres. This poses considerable problems for traditional NLP tools, which were developed with other domains in mind, which often make strong assumptions about orthographic uniformity (i.e., there is just one way to spell you). One approach to cope with this problem is to annotate in-domain data (Gimpel et al., 2011). Machine translation suffers acutely from the domain-mismatch problem caused by microblog text. On one hand, standard models are probably suboptimal since they (like many models) assume orthographic uniformity in the input. However, more acutely, the data used to develop these systems and train their models is drawn from formal and carefully edited domains, such as parallel web pages and translated legal documents. MT training data seldom looks anything like microblog text. This paper introduces a method for finding naturally occurring parallel microblog text, which helps address the domain-mismatch problem. Our method is inspired by the perhaps surprising observation that a reasonable number of microblog users tweet “in parallel” in two or more languages. For instance, the American entertainer Snoop Dogg regularly posts parallel messages on Sina Weibo (Mainland China’s equivalent of Twitter), for example, watup Kenny Mayne!! - Kenny Mayne,最近这么样啊!!, where an English message and its Chinese translation are in the same post, separated by a dash. Our method is able to identify and extract such translations. Briefly, this requires determining if a tweet contains more than one language, if these multilingual utterances contain translated material (or are due to something else, such as code switching), and what the translated spans are. 
The paper is organized as follows. Section 2 describes the related work in parallel data extraction. Section 3 presents our model to extract parallel data within the same document. Section 4 describes our extraction pipeline. Section 5 describes the data we gathered from both Sina Weibo (Chinese-English) and Twitter (Chinese-English and Arabic-English). We then present experiments showing that our harvested data not only substantially improves translations of microblog text with 176 existing (and arguably inappropriate) translation models, but that it improves the translation of more traditional MT genres, like newswire. We conclude in Section 6. 2 Related Work Automatic collection of parallel data is a wellstudied problem. Approaches to finding parallel web documents automatically have been particularly important (Resnik and Smith, 2003; Fukushima et al., 2006; Li and Liu, 2008; Uszkoreit et al., 2010; Ture and Lin, 2012). These broadly work by identifying promising candidates using simple features, such as URL similarity or “gist translations” and then identifying truly parallel segments with more expensive classifiers. More specialized resources were developed using manual procedures to leverage special features of very large collections, such as Europarl (Koehn, 2005). Mining parallel or comparable messages from microblogs has mainly relied on Cross-Lingual Information Retrieval techniques (CLIR). Jelh et al. (2012) attempt to find pairs of tweets in Twitter using Arabic tweets as search queries in a CLIR system. Afterwards, the model described in (Xu et al., 2001) is applied to retrieve a set of ranked translation candidates for each Arabic tweet, which are then used as parallel candidates. The work on mining parenthetical translations (Lin et al., 2008), which attempts to find translations within the same document, has some similarities with our work, since parenthetical translations are within the same document. However, parenthetical translations are generally used to translate names or terms, which is more limited than our work which extracts whole sentence translations. Finally, crowd-sourcing techniques to obtain translations have been previously studied and applied to build datasets for casual domains (Zbib et al., 2012; Post et al., 2012). These approaches require remunerated workers to translate the messages, and the amount of messages translated per day is limited. We aim to propose a method that acquires large amounts of parallel data for free. The drawback is that there is a margin of error in the parallel segment identification and alignment. However, our system can be tuned for precision or for recall. 3 Parallel Segment Retrieval We will first abstract from the domain of Microblogs and focus on the task of retrieving parallel segments from single documents. Prior work on finding parallel data attempts to reason about the probability that pairs of documents (x, y) are parallel. In contrast, we only consider one document at a time, defined by x = x1, x2, . . . , xn, and consisting of n tokens, and need to determine whether there is parallel data in x, and if so, where are the parallel segments and their languages. For simplicity, we assume that there are at most 2 continuous segments that are parallel. As representation for the parallel segments within the document, we use the tuple ([p, q], l, [u, v], r, a). The word indexes [p, q] and [u, v] are used to identify the left segment (from p to q) and right segment (from u to v), which are parallel. 
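For readability, the hypothesis tuple can be pictured as a small record; the field names and example values below are ours, only the structure ([p, q], l, [u, v], r, a) comes from the definition above.

```python
# Minimal container for the parallel-segment hypothesis; field names are ours,
# the tuple structure ([p, q], l, [u, v], r, a) is the one defined in the text.
from dataclasses import dataclass
from typing import Dict

@dataclass
class Bispan:
    p: int             # start of the left segment (token index into x)
    q: int             # end of the left segment
    l: str             # language of the left segment, e.g. "en"
    u: int             # start of the right segment
    v: int             # end of the right segment
    r: str             # language of the right segment, e.g. "zh"
    a: Dict[int, int]  # word alignment: right-segment index -> left-segment index

hyp = Bispan(p=0, q=3, l="en", u=5, v=9, r="zh", a={5: 0, 6: 2, 7: 3, 8: 3, 9: 1})
assert hyp.p <= hyp.q < hyp.u <= hyp.v   # spans must not overlap (p <= q < u <= v)
```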
We shall refer [p, q] and [u, v] as the spans of the left and right segments. To avoid overlaps, we set the constraint p ≤q < u ≤v. Then, we use l and r to identify the language of the left and right segments, respectively. Finally, a represents the word alignment between the words in the left and the right segments. The main problem we address is to find the parallel data when the boundaries of the parallel segments are not defined explicitly. If we knew the indexes [p, q] and [u, v], we could simply run a language detector for these segments to find l and r. Then, we would use an word alignment model (Brown et al., 1993; Vogel et al., 1996), with source s = xp, . . . , xq, target t = xu, . . . , xv and lexical table θl,r to calculate the Viterbi alignment a. Finally, from the probability of the word alignments, we can determine whether the segments are parallel. Thus, our model will attempt to find the optimal values for the segments [p, q][u, v], languages l, r and word alignments a jointly. However, there are two problems with this approach. Firstly, word alignment models generally attribute higher probabilities to smaller segments, since these are the result of a smaller product chain of probabilities. In fact, because our model can freely choose the segments to align, choosing only one word as the left segment that is well aligned to a word in the right segment would be the best choice. This is obviously not our goal, since we would not obtain any useful sentence pairs. Secondly, inference must be performed over the combination of all latent variables, which is intractable using 177 a brute force algorithm. We shall describe our model to solve the first problem in 3.1 and our dynamic programming approach to make the inference tractable in 3.2. 3.1 Model We propose a simple (non-probabilistic) threefactor model that models the spans of the parallel segments, their languages, and word alignments jointly. This model is defined as follows: S([u, v], r, [p, q],l, a | x) = Sα S([p, q], [u, v] | x)× Sβ L(l, r | [p, q], [u, v], x)× Sγ T (a | [p, q], l, [u, v], r, x) Each of the components is weighted by the parameters α, β and γ. We set these values empirically α = 0.3, β = 0.3 and γ = 0.4, and leave the optimization of these parameters as future work. We discuss the components of this model in turn. Span score SS. We define the score of hypothesized pair of spans [p, q], [u, v] as: SS([p, q], [u, v] | x) = (q −p + 1) + (v −u + 1) P 0<p′≤q′<u′≤v′≤n(q′ −p′ + 1) + (v′ −u′ + 1)× ψ([p, q], [u, v], x) The first factor is a distribution over all spans that assigns higher probability to segmentations that cover more words in the document. It is highest for segmentations that cover all the words in the document (this is desirable since there are many sentence pairs that can be extracted but we want to find the largest sentence pair in the document). The function ψ takes on values of 0 or 1 depending on whether certain constraints are violated, these include: parenthetical constraints that enforce that spans must not break text within parenthetical characters and language constraints that ensure that we do break a sequence of Mandarin characters, Arabic words or Latin words. Language score SL. The language score SL(l, r | [p, q], [u, v], x) indicates whether the language labels l, r are appropriate to the document contents: SL(l, r | [p, q], [u, v], x) = Pq i=p L(l, xi) + Pv i=u L(r, xi) n where L(l, x) is a language detection function that yields 1 if the word xi is in language l, and 0 otherwise. 
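For illustration, the sketch below computes the two scores defined so far on a toy tokenization of the bilingual post quoted in Section 1; the translation score ST introduced next completes the model. The character-range language test is a simplification, ψ is reduced to a trivial placeholder, and 0-based token indexes are used.

```python
# Toy sketch of the span score S_S and language score S_L from Section 3.1.
import re

def is_lang(token, lang):
    """Crude character-range language test standing in for L(l, x)."""
    if lang == "zh":
        return bool(re.search(r"[\u4e00-\u9fff]", token))
    if lang == "en":
        return bool(re.fullmatch(r"[A-Za-z']+", token))
    return False

def span_score(p, q, u, v, x):
    covered = (q - p + 1) + (v - u + 1)
    n = len(x)
    # Normalizer: total coverage over all valid bispans 0 <= p' <= q' < u' <= v' < n.
    total = sum((q2 - p2 + 1) + (v2 - u2 + 1)
                for p2 in range(n) for q2 in range(p2, n)
                for u2 in range(q2 + 1, n) for v2 in range(u2, n))
    psi = 1  # placeholder for the parenthetical / language-run constraints
    return covered / total * psi

def language_score(p, q, l, u, v, r, x):
    hits = (sum(is_lang(x[i], l) for i in range(p, q + 1))
            + sum(is_lang(x[i], r) for i in range(u, v + 1)))
    return hits / len(x)

x = "watup Kenny Mayne - Kenny Mayne 最近 这么样 啊".split()
p, q, u, v = 0, 2, 4, 8   # left span before the dash, right span after it
print(span_score(p, q, u, v, x))
print(language_score(p, q, "en", u, v, "zh", x))
```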
We build the function simply by considering all words that are composed of Latin characters as English, Arabic characters as Arabic and Han characters as Mandarin. This approach is not perfect, but it is simple and works reasonably well for our purposes. Translation score ST . The translation score ST (a | [p, q], l, [u, v], r) indicates whether [p, q] is a reasonable translation of [u, v] with the alignment a. We rely on IBM Model 1 probabilities for this score: ST (a | [p, q], l, [u, v], r, x) = 1 (q −p + 1)v−u+2 v Y i=u PM1(xi | xai). The lexical tables PM1 for the various language pairs are trained a priori using available parallel corpora. While IBM Model 1 produces worse alignments than other models, in our problem, we need to efficiently consider all possible spans, language pairs and word alignments, which makes the problem intractable. We will show that dynamic programing can be used to make this problem tractable, using Model 1. Furthermore, IBM Model 1 has shown good performance for sentence alignment systems previously (Xu et al., 2005; Braune and Fraser, 2010). 3.2 Inference Our goal is to find the spans, language pair and alignments such that: arg max [p,q],l,[u,v],r,a S([p, q], l, [u, v], r, a | x) (1) A high score indicates that the predicted bispan is likely to correspond to a valid parallel span, so we set a constant threshold τ to determine whether a document has parallel data, i.e., the value of z: z∗= max [u,v],r,[p,q],l,a S([u, v], r, [p, q], l, a | x) > τ Naively maximizing Eq. 1 would require O(|x|6) operations, which is too inefficient to be practical on large datasets. To process millions of documents, this process would need to be optimized. The main bottleneck of the naive algorithm is finding new Viterbi Model 1 word alignments every time we change the spans. Thus, we propose 178 an iterative approach to compute the Viterbi word alignments for IBM Model 1 using dynamic programming. Dynamic programming search. The insight we use to improve the runtime is that the Viterbi word alignment of a bispan can be reused to calculate the Viterbi word alignments of larger bispans. The algorithm operates on a 4-dimensional chart of bispans. It starts with the minimal valid span (i.e., [0, 0], [1, 1]) and progressively builds larger spans from smaller ones. Let Ap,q,u,v represent the Viterbi alignment (under ST ) of the bispan [p, q], [u, v]. The algorithm uses the following recursions defined in terms of four operations λ{+v,+u,+p,+q} that manipulate a single dimension of the bispan to construct larger spans: • Ap,q,u,v+1 = λ+v(Ap,q,u,v) adds one token to the end of the right span with index v + 1 and find the viterbi alignment for that token. This requires iterating over all the tokens in the left span, [p, q] and possibly updating their alignments. See Fig. 1 for an illustration. • Ap,q,u+1,v = λ+u(Ap,q,u,v) removes the first token of the right span with index u, so we only need to remove the alignment from u, which can be done in time O(1). • Ap,q+1,u,v = λ+q(Ap,q,u,v) adds one token to the end of the left span with index q + 1, we need to check for each word in the right span, if aligning to the word in index q+1 yields a better translation probability. This update requires n− q + 1 operations. • Ap+1,q,u,v = λ+p(Ap,q,u,v) removes the first token of the left span with index p. After removing the token, we need to find new alignments for all tokens that were aligned to p. 
Thus, the number of operations for this update is K × (q −p + 1), where K is the number of words that were aligned to p. In the best case, no words are aligned to the token in p, and we can simply remove it. In the worst case, if all target words were aligned to p, this update will result in the recalculation of all Viterbi Alignments. The algorithm proceeds until all valid cells have been computed. One important aspect is that the update functions differ in complexity, so the sequence of updates we apply will impact the performance of the system. Most spans are reachable using any of the four update functions. For instance, the span A2,3,4,5 can be reached using λ+v(A2,3,4,4), λ+u(A2,3,3,5), λ+q(A2,2,4,5) or λ+p(A1,3,4,5). However, we want to use λ+u a b A B a b A B a b A B p q u v p q u v λ+v Figure 1: Illustration of the λ+v operator. The light gray boxes show the parallel span and the dark boxes show the span’s Viterbi alignment. In this example, the parallel message contains a “translation” of a b to A B. whenever possible, since it only requires one operation, although that is not always possible. For instance, the state A2,2,2,4 cannot be reached using λ+u, since the state A2,2,1,4 is not valid, because the spans overlap. If this happens, incrementally more expensive updates need to be used, such as λ+v, then λ+q, which are in the same order of complexity. Finally, we want to minimize the use of λ+p, which is quadratic in the worst case. Thus, we use the following recursive formulation that guarantees the optimal outcome: Ap,q,u,v =          λ+u(Ap,q,u−1,v) if u > q + 1 λ+v(Ap,q,u,v−1) else if v > q + 1 λ+p(Ap−1,q,u,v) else if q = p + 1 λ+q(Ap,q−1,u,v) otherwise This transition function applies the cheapest possible update to reach state Ap,q,u,v. Complexity analysis. We can see that λ+u is only needed in the following the cases [0, 1][2, 2], [1, 2][3, 3], · · · , [n −2, n −1][n, n]. Since, this update is quadratic in the worst case, the complexity of this operations is O(n3). The update λ+q, is applied to the cases [∗, 1][2, 2], [∗, 2][3, 3], · · · , [∗, n−1], [n, n], where ∗denotes any number within the span constraints but not present in previous updates. Since, the update is linear and we need to iterate through all tokens twice, this update takes O(n3) operations. The update λ+v is applied for the cases [∗, 1][2, ∗], [∗, 2][3, ∗], · · · , [∗, n −1], [n, ∗]. Thus, with three degrees of freedom and a linear update, it runs in O(n4) time. Finally, update λ+u runs in constant time, but is run for all remaining cases, which constitute O(n4) space. By summing the 179 executions of all updates, we observe that the order of magnitude of our exact inference process is O(n4). Note that for exact inference, it is not possible to get a lower order of magnitude, since we need to at least iterate through all possible span values once, which takes O(n4) time. 4 Parallel Data Extraction We will now describe our method to extract parallel data from Microblogs. The target domains in this work are Twitter and Sina Weibo, and the main language pair is Chinese-English. Furthermore, we also run the system for the ArabicEnglish language pair using the Twitter data. For the Twitter domain, we use a previously crawled dataset from the years 2008 to 2013, where one million tweets are crawled every day. In total, we processed 1.6 billion tweets. Regarding Sina Weibo, we built a crawler that continuously collects tweets from Weibo. 
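Returning briefly to the dynamic program of §3.2: under IBM Model 1, each right-span token is linked to its best left-span token independently of the others, so the λ+v update only has to score the newly added token against the fixed left span. The sketch below illustrates this with an invented two-entry lexical table and with the length normalization of ST omitted.

```python
# Toy illustration of the lambda_{+v} update: growing the right span by one token
# only needs that token's best Model 1 link into the (fixed) left span.
import math

# Invented stand-in for the IBM Model 1 lexical table P_M1(target | source).
t_table = {
    ("世界", "world"): 0.6, ("世界", "hello"): 0.05,
    ("你好", "hello"): 0.7, ("你好", "world"): 0.05,
}

def p_m1(tgt, src):
    return t_table.get((tgt, src), 1e-6)   # small floor for unseen pairs

def viterbi_extend(x, p, q, alignment, log_score, v_new):
    """Align the new right-span token x[v_new] to its best left-span token in
    [p, q]; O(q - p + 1) work instead of realigning the whole bispan."""
    best_j, best_p = max(((j, p_m1(x[v_new], x[j])) for j in range(p, q + 1)),
                         key=lambda jp: jp[1])
    alignment[v_new] = best_j
    return log_score + math.log(best_p)

# Example: left span x[0..1] is English, right span grows token by token.
x = ["hello", "world", "你好", "世界"]
p, q = 0, 1
alignment, log_score = {}, 0.0
for v in range(2, len(x)):                  # u = 2, v grows from 2 to 3
    log_score = viterbi_extend(x, p, q, alignment, log_score, v)
print(alignment)                            # {2: 0, 3: 1}
print(math.exp(log_score))                  # product of the best lexical links
```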
We start from one seed user and collect his posts, and then we find the users he follows that we have not considered, and repeat. Due to the rate limiting established by the Weibo API1, we are restricted in terms of number of requests every hour, which greatly limits the amount of messages we can collect. Furthermore, each request can only fetch up to 100 posts from a user, and subsequent pages of 100 posts require additional API calls. Thus, to optimize the number of parallel posts we can collect per request, we only crawl all messages from users that have at least 10 parallel tweets in their first 100 posts. The number of parallel messages is estimated by running our alignment model, and checking if τ > φ, where φ was set empirically initially, and optimized after obtaining annotated data, which will be detailed in 5.1. Using this process, we crawled 65 million tweets from Sina Weibo within 4 months. In both cases, we first filter the collection of tweets for messages containing at least one trigram in each language of the target language pair, determined by their Unicode ranges. This means that for the Chinese-English language pair, we only keep tweets with more than 3 Mandarin characters and 3 latin words. Furthermore, based on the work in (Jelh et al., 2012), if a tweet A is identified as a retweet, meaning that it references another tweet B, we also consider the hypothesis that these tweets may be mutual translations. Thus, if A and B contain trigrams in different languages, 1http://open.weibo.com/wiki/API文档/en these are also considered for the extraction of parallel data. This is done by concatenating tweets A and B, and adding the constraint that [p, q] must be within A and [u, v] must be within B. Finally, identical duplicate tweets are removed. After filtering, we obtained 1124k ZH-EN tweets from Sina Weibo, 868k ZH-EN and 136k AR-EN tweets from Twitter. These language pairs are not definite, since we simply check if there is a trigram in each language. Finally, we run our alignment model described in section 3, and obtain the parallel segments and their scores, which measure how likely those segments are parallel. In this process, lexical tables for EN-ZH language pair used by Model 1 were built using the FBIS dataset (LDC2003E14) for both directions, a corpus of 300K sentence pairs from the news domain. Likewise, for the ENAR language pair, we use a fraction of the NIST dataset, by removing the data originated from UN, which leads to approximately 1M sentence pairs. 5 Experiments We evaluate our method in two ways. First, intrinsically, by observing how well our method identifies tweets containing parallel data, the language pair and what their spans are. Second, extrinsically, by looking at how well the data improves a translation task. This methodology is similar to that of Smith et al. (2010). 5.1 Parallel Data Extraction Data. Our method needs to determine if a given tweet contains parallel data, and if so, what is the language pair of the data, and what segments are parallel. Thus, we had a native Mandarin speaker, also fluent in English, to annotate 2000 tweets sampled from crawled Weibo tweets. One important question of answer is what portion of the Microblogs contains parallel data. Thus, we also use the random sample Twitter and annotated 1200 samples, identifying whether each sample contains parallel data, for the EN-ZH and AR-EN filtered tweets. Metrics. To test the accuracy of the score S, we ordered all 2000 samples by score. 
Then, we calculate the precision, recall and accuracy at increasing intervals of 10% of the top samples. We count as a true positive (tp) if we correctly identify a parallel tweet, and as a false positive (fp) spuriously detect a parallel tweet. Finally, a true negative (tn) occurs when we correctly detect a non-parallel 180 tweet, and a false negative (fn) if we miss a parallel tweet. Then, we set the precision as tp tp+fp, recall as tp tp+fn and accuracy as tp+tn tp+fp+tn+fn. For language identification, we calculate the accuracy based on the number of instances that were identified with the correct language pair. Finally, to evaluate the segment alignment, we use the Word Error Rate (WER) metric, without substitutions, where we compare the left and right spans of our system and the respective spans of the reference. We count an insertion error (I) for each word in our system’s spans that is not present in the reference span and a deletion error (D) for each word in the reference span that is not present in our system’s spans. Thus, we set WER = D+I N , where N is the number of tokens in the tweet. To compute this score for the whole test set, we compute the average of the WER for each sample. Results. The precision, recall and accuracy curves are shown in Figure 2. The quality of the parallel sentence detection did not vary significantly with different setups, so we will only show the results for the best setup, which is the baseline model with span constraints. 0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1   10%   20%   30%   40%   50%   60%   70%   80%   90%   100%   Precision   Recall   Accuracy   Figure 2: Precision, recall and accuracy curves for parallel data detection. The y-axis denotes the scores for each metric, and the x-axis denotes the percentage of the highest scoring sentence pairs that are kept. From the precision and recall curves, we observe that most of the parallel data can be found at the top 30% of the filtered tweets, where 5 in 6 tweets are detected correctly as parallel, and only 1 in every 6 parallel sentences is lost. We will denote the score threshold at this point as φ, which is a good threshold to estimate on whether the tweet is parallel. However, this parameter can be tuned for precision or recall. We also see that in total, 30% of the filtered tweets are parallel. If we generalize this ratio for the complete set with 1124k tweets, we can expect approximately 337k parallel sentences. Finally, since 65 million tweets were extracted to generate the 337k tweets, we estimate that approximately 1 parallel tweet can be found for every 200 tweets we process using our targeted approach. On the other hand, from the 1200 tweets from Twitter, we found that 27 had parallel data in the ZH-EN pair, if we extrapolate for the whole 868k filtered tweets, we expect that we can find 19530. 19530 parallel sentences from 1.6 billion tweets crawled randomly, represents 0.001% of the total corpora. For AR-EN, a similar result was obtained where we expect 12407 tweets out of the 1.6 billion to be parallel. This shows that targeted approaches can substantially reduce the crawling effort required to find parallel tweets. Still, considering that billions of tweets are posted daily, this is a substantial source of parallel data. The remainder of the tests will be performed on the Weibo dataset, which contains more parallel data. Tests on the Twitter data will be conducted as future work, when we process Twitter data on a larger scale to obtain more parallel sentences. 
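For reference, the span-level WER used above can be computed as in the following sketch, where hypothesis and reference spans are given as toy token-index ranges and only insertions and deletions are counted.

```python
# Toy sketch of the span-level WER from Section 5.1: insertions are hypothesis tokens
# outside the reference spans, deletions are reference tokens the hypothesis missed.
def span_wer(hyp_left, hyp_right, ref_left, ref_right, n_tokens):
    hyp = set(hyp_left) | set(hyp_right)     # token indexes covered by the system's spans
    ref = set(ref_left) | set(ref_right)     # token indexes in the annotated spans
    insertions = len(hyp - ref)
    deletions = len(ref - hyp)
    return (insertions + deletions) / n_tokens

# Hypothesis left span [0, 3] vs reference [0, 4]; right spans match exactly.
hyp_l, hyp_r = range(0, 4), range(6, 10)
ref_l, ref_r = range(0, 5), range(6, 10)
print(span_wer(hyp_l, hyp_r, ref_l, ref_r, n_tokens=12))   # 1 deletion / 12 tokens
```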
For the language identification task, we had an accuracy of 99.9%, since distinguishing English and Mandarin is trivial. The small percentage of errors originated from other latin languages (Ex: French) due to our naive language detector. As for the segment alignment task. Our baseline system with no constraints obtains a WER of 12.86%, and this can be improved to 11.66% by adding constraints to possible spans. This shows that, on average, approximately 1 in 9 words on the parallel segments is incorrect. However, translation models are generally robust to such kinds of errors and can learn good translations even in the presence of imperfect sentence pairs. Among the 578 tweets that are parallel, 496 were extracted within the same tweet and 82 were extracted from retweets. Thus, we see that the majority of the parallel data comes from within the same tweet. Topic analysis. To give an intuition about the contents of the parallel data we found, we looked at the distribution over topics of the parallel dataset inferred by LDA (Blei et al., 2003). Thus, we grouped the Weibo filtered tweets by users, and ran LDA over the predicted English segments, with 12 topics. The 7 most interpretable topics are shown in Table 1. We see that the data contains a 181 # Topic Most probable words in topic 1 (Dating) love time girl live mv back word night rt wanna 2 (Entertainment) news video follow pong image text great day today fans 3 (Music) cr day tour cn url amazon music full concert alive 4 (Religion) man god good love life heart would give make lord 5 (Nightlife) cn url beijing shanqi party adj club dj beijiner vt 6 (Chinese News) china chinese year people world beijing years passion country government 7 (Fashion) street fashion fall style photo men model vogue spring magazine Table 1: Most probable words inferred using LDA in several topics from the parallel data extracted from Weibo. Topic labels (in parentheses) were assigned manually for illustration purposes. variety of topics, both formal (Chinese news, religion) and informal (entertainment, music). Example sentence pairs. To gain some perspective on the type of sentence pairs we are extracting, we will illustrate some sentence pairs we crawled and aligned automatically. Table 2 contains 5 English-Mandarin and 4 English-Arabic sentence pairs that were extracted automatically. These were chosen, since they contain some aspects that are characteristic of the text present in Microblogs and Social Media. These are: • Abbreviations - In most sentence pairs examples, we can witness the use of abbreviated forms of English words, such as wanna, TMI, 4 and imma. These can be normalized as want to, too much information, for and I am going to, respectively. In sentence 5, we observe that this phenomena also occurs in Mandarin. We find that TMD is a popular way to write 他妈的 whose Pinyin rendering is t¯a m¯a de. The meaning of this expression depends on the context it is used, and can convey a similar connotation as adding the intensifier the hell to an English sentence. • Jargon - Another common phenomena is the appearance of words that are only used in subcommunities. For instance, in sentence pair 4, we the jargon word cday is used, which is a colloquial variant for birthday. • Emoticons - In sentence 8, we observe the presence of the emoticon :), which is frequently used in this media. We found that emoticons are either translated as they are or simply removed, in most cases. 
• Syntax errors - In the domain of microblogs, it is also common that users do not write strictly syntactic sentences, for instance, in sentence pair 7, the sentence onni this gift only 4 u, is clearly not syntactically correct. Firstly, onni is a named entity, yet it is not capitalized. Secondly, a comma should follow onni. Thirdly, the verb is should be used after gift. Having examples of these sentences in the training set, with common mistakes (intentional or not), might become a key factor in training MT systems that can be robust to such errors. • Dialects - We can observe a much broader range of dialects in our data, since there are no dialect standards in microblogs. For instance, in sentence pair 6, we observe an arabic word (in bold) used in the spoken Arabic dialect used in some countries along the shores of the Persian Gulf, which means means the next. In standard Arabic, a significantly different form is used. We can also see in sentence pair 9 that our aligner does not alway make the correct choice when determining spans. In this case, the segment RT @MARYAMALKHAWAJA: was included in the English segment spuriously, since it does not correspond to anything in the Arabic counterpart. 5.2 Machine Translation Experiments We report on machine translation experiments using our harvested data in two domains: edited news and microblogs. News translation. For the news test, we created a new test set from a crawl of the ChineseEnglish documents on the Project Syndicate website2, which contains news commentary articles. We chose to use this data set, rather than more standard NIST test sets to ensure that we had recent documents in the test set (the most recent NIST test sets contain documents published in 2007, well before our microblog data was created). We extracted 1386 parallel sentences for tuning and another 1386 sentences for testing, from the manually aligned segments. For this test set, we used 8 million sentences from the full NIST parallel dataset as the language model training data. We shall call this test set Syndicate. 2http://www.project-syndicate.org/ 182 ENGLISH MANDARIN 1 i wanna live in a wes anderson world 我想要生活在Wes Anderson的世界里 2 Chicken soup, corn never truly digests. TMI. 鸡汤吧,玉米神马的从来没有真正消化过.恶心 3 To DanielVeuleman yea iknw imma work on that 对DanielVeuleman说,是的我知道,我正在向那方面努力 4 msg 4 Warren G his cday is today 1 yr older. 发信息给Warren G,今天是他的生日,又老了一岁了。 5 Where the hell have you been all these years? 这些年你TMD到哪去了 ENGLISH ARABIC 6 It’s gonna be a warm week! Qk ø AJ Ë@ ¨ ñJ.ƒB@ 7 onni this gift only 4 u ½Ë ¡® ¯ éK YêË@ è Yë ú Gð @ 8 sunset in aqaba :) (:éJ.®ªË@ ú ¯ Ò ‚Ë@ H. ðQ « 9 RT @MARYAMALKHAWAJA: there is a call @Y « ‡£A JÓ èY« ú ¯ H@QëA ¢ÖÏ Z@Y K ¼A Jë for widespread protests in #bahrain tmrw Table 2: Examples of English-Mandarin and English-Arabic sentence pairs. The English-Mandarin sentences were extracted from Sina Weibo and the English-Arabic sentences were extracted from Twitter. Some messages have been shorted to fit into the table. Some interesting aspects of these sentence pairs are marked in bold. Microblog translation. To carry out the microblog translation experiments, we need a high quality parallel test set. Since we are not aware of such a test set, we created one by manually selecting parallel messages from Weibo. Our procedure was as follows. We selected 2000 candidate Weibo posts from users who have a high number of parallel tweets according to our automatic method (at least 2 in every 5 tweets). 
To these, we added another 2000 messages from our targeted Weibo crawl, but these had no requirement on the proportion of parallel tweets they had produced. We identified 2374 parallel segments, of which we used 1187 for development and 1187 for testing. We refer to this test set as Weibo.3 Obviously, we removed the development and test sets from our training data. Furthermore, to ensure that our training data was not too similar to the test set in the Weibo translation task, we filtered the training data to remove near duplicates by computing the edit distance between each parallel sentence in the held-out set and each training instance. If either the source or the target side of a training instance had an edit distance of less than 10%, we removed it.4 As for the language models, we collected a further 10M tweets from Twitter for the English language model and another 10M tweets from Weibo for the Chinese language model.

3 We acknowledge that self-translated messages are probably not a typically representative sample of all microblog messages. However, we do not have the resources to produce a carefully curated test set with a more broadly representative distribution. Still, we believe these results are informative as long as this is kept in mind.

4 This removed approximately 150,000 training instances.

             Syndicate        Weibo
             ZH-EN  EN-ZH     ZH-EN  EN-ZH
FBIS          9.4   18.6      10.4   12.3
NIST         11.5   21.2      11.4   13.9
Weibo         8.75  15.9      15.7   17.2
FBIS+Weibo   11.7   19.2      16.5   17.8
NIST+Weibo   13.3   21.5      16.9   17.9

Table 3: BLEU scores for the two test sets in both translation directions (left to right), broken down by training corpus (top to bottom).

Baselines. We report results on these test sets using different training data. First, we use the FBIS dataset, which contains 300K high-quality sentence pairs, mostly in the broadcast news domain. Second, we use the full 2012 NIST Chinese-English dataset (approximately 8M sentence pairs, including FBIS). Finally, we use our crawled data (referred to as Weibo) by itself and also combined with the two previous training sets.

Setup. We use the Moses phrase-based MT system with standard features (Koehn et al., 2003). For reordering, we use the MSD reordering model (Axelrod et al., 2005). As the language model, we use a 5-gram model with Kneser-Ney smoothing. The weights were tuned using MERT (Och, 2003). Results are reported with BLEU-4 (Papineni et al., 2002).

Results. The BLEU scores for the different parallel corpora are shown in Table 3, and the top 10 out-of-vocabulary (OOV) words for each dataset are shown in Table 4.
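The OOV analysis summarized in Table 4 can be sketched in a few lines; this is an illustration only, assuming whitespace tokenization, and the file names are hypothetical rather than taken from the paper.

```python
from collections import Counter

def top_oov_words(train_files, test_file, n=10):
    """Return the n most frequent test-set word types that never occur in the training data."""
    train_vocab = set()
    for path in train_files:
        with open(path, encoding="utf-8") as f:
            for line in f:
                train_vocab.update(line.split())

    oov_counts = Counter()
    with open(test_file, encoding="utf-8") as f:
        for line in f:
            for token in line.split():
                if token not in train_vocab:
                    oov_counts[token] += 1
    return oov_counts.most_common(n)

# Hypothetical usage: comparing the English side of the Syndicate test set against FBIS
# might yield pairs along the lines of [("obama", 83), ("barack", 59), ...] as in Table 4.
# top_oov_words(["fbis.en"], "syndicate.test.en")
```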
We observe that for the Syndicate test set, the NIST and FBIS datasets 183 Syndicate (test) Weibo (test) FBIS NIST Weibo FBIS NIST Weibo obama (83) barack (59) democracies (15) 2012 (24) showstudio (9) submissions (4) barack (59) namo (6) imbalances (13) alanis (13) crue (9) ivillage (4) princeton (40) mitt (6) mahmoud (12) crue (9) overexposed (8) scola (3) ecb (8) guant (6) millennium (9) showstudio (9) tweetmeian (5) rbst (3) bernanke (8) fairtrade (6) regimes (8) overexposed (8) tvd (5) curitiba (3) romney (7) hollande (5) wolfowitz (7) itunes (8) iheartradio (5) zeman (2) gaddafi(7) wikileaks (4) revolutions (7) havoc (8) xoxo (4) @yaptv (2) merkel (7) wilders (3) qaddafi(7) sammy (6) snoop (4) witnessing (2) fats (7) rant (3) geopolitical (7) obama (6) shinoda (4) whoohooo (2) dialogue (7) esm (3) genome (7) lol (6) scrapbook (4) wbr (2) Table 4: The most frequent out-of-vocabulary (OOV) words and their counts for the two English-source test sets with three different training sets. perform better than our extracted parallel data. This is to be expected, since our dataset was extracted from an extremely different domain. However, by combining the Weibo parallel data with this standard data, improvements in BLEU are obtained. Error analysis indicates that one major factor is that names from current events, such as Romney and Wikileaks do not occur in the older NIST and FBIS datasets, but they are represented in the Weibo dataset. Furthermore, we also note that the system built on the Weibo dataset does not perform substantially worse than the one trained on the FBIS dataset, a further indication that harvesting parallel microblog data yields a diverse collection of translated material. For the Weibo test set, a significant improvement over the news datasets can be achieved using our crawled parallel data. Once again newer terms, such as iTunes, are one of the reasons older datasets perform less well. However, in this case, the top OOV words of the news domain datasets are not the most accurate representation of coverage problems in this domain. This is because many frequent words in microblogs, e.g., nonstandard abbreviations, like u and 4 are found in the news domain as words, albeit with different meanings. Thus, the OOV table gives an incomplete picture of the translation problems when using the news domain corpora to translate microblogs. Also, some structural errors occur when training with the news domain datasets, one such example is shown in table 5, where the character 说is incorrectly translated to said. This occurs because this type of constructions is infrequent in news datasets. Furthermore, we can see that compound expressions, such as the translation from 派对时 刻to party time are also learned. Finally, we observe that combining the datasets Source 对sam farrar 说,派对时刻 Reference to sam farrar , party time FBIS farrar to sam said , in time NIST to sam farrar said , the moment WEIBO to sam farrar , party time Table 5: Translation Examples using different training sets. yields another gain over individual datasets, both in the Syndicate and in the Weibo test sets. 6 Conclusion We presented a framework to crawl parallel data from microblogs. We find parallel data from single posts, with translations of the same sentence in two languages. We show that a considerable amount of parallel sentence pairs can be crawled from microblogs and these can be used to improve Machine Translation by updating our translation tables with translations of newer terms. 
Furthermore, the in-domain data can substantially improve the translation quality on microblog data. The resources described in this paper and further developments are available to the general public at http://www.cs.cmu.edu/∼lingwang/utopia. Acknowledgements The PhD thesis of Wang Ling is supported by FCT grant SFRH/BD/51157/2010. The authors wish to express their gratitude to thank William Cohen, Noah Smith, Waleed Ammar, and the anonymous reviewers for their insight and comments. We are also extremely grateful to Brendan O’Connor for providing the Twitter data and to Philipp Koehn and Barry Haddow for providing the Project Syndicate data. 184 References [Axelrod et al.2005] Amittai Axelrod, Ra Birch Mayne, Chris Callison-burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT. [Blei et al.2003] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March. [Braune and Fraser2010] Fabienne Braune and Alexander Fraser. 2010. Improved unsupervised sentence alignment for symmetrical and asymmetrical parallel corpora. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING ’10, pages 81–89, Stroudsburg, PA, USA. Association for Computational Linguistics. [Brown et al.1993] Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Comput. Linguist., 19:263–311, June. [Fukushima et al.2006] Ken’ichi Fukushima, Kenjiro Taura, and Takashi Chikayama. 2006. A fast and accurate method for detecting English-Japanese parallel texts. In Proceedings of the Workshop on Multilingual Language Resources and Interoperability, pages 60–67, Sydney, Australia, July. Association for Computational Linguistics. [Gimpel et al.2011] Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Partof-speech tagging for twitter: annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT ’11, pages 42–47, Stroudsburg, PA, USA. Association for Computational Linguistics. [Jelh et al.2012] Laura Jelh, Felix Hiebel, and Stefan Riezler. 2012. Twitter translation using translationbased cross-lingual retrieval. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 410–421, Montr´eal, Canada, June. Association for Computational Linguistics. [Koehn et al.2003] Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 48–54, Morristown, NJ, USA. Association for Computational Linguistics. [Koehn2005] Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proceedings of the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand. AAMT, AAMT. [Li and Liu2008] Bo Li and Juan Liu. 2008. Mining Chinese-English parallel corpora from the web. In Proceedings of the 3rd International Joint Conference on Natural Language Processing (IJCNLP). 
[Lin et al.2008] Dekang Lin, Shaojun Zhao, Benjamin Van Durme, and Marius Pas¸ca. 2008. Mining parenthetical translations from the web by word alignment. In Proceedings of ACL-08: HLT, pages 994– 1002, Columbus, Ohio, June. Association for Computational Linguistics. [Och2003] Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 160–167, Stroudsburg, PA, USA. Association for Computational Linguistics. [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. [Post et al.2012] Matt Post, Chris Callison-Burch, and Miles Osborne. 2012. Constructing parallel corpora for six indian languages via crowdsourcing. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 401–409, Montr´eal, Canada, June. Association for Computational Linguistics. [Resnik and Smith2003] Philip Resnik and Noah A. Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29:349–380. [Smith et al.2010] Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. [Ture and Lin2012] Ferhan Ture and Jimmy Lin. 2012. Why not grab a free lunch? mining large corpora for parallel sentences to improve translation modeling. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 626–630, Montr´eal, Canada, June. Association for Computational Linguistics. [Uszkoreit et al.2010] Jakob Uszkoreit, Jay Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. Large scale parallel document mining for machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1101– 1109. [Vogel et al.1996] Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proceedings of the 16th conference on Computational linguistics - Volume 2, COLING ’96, pages 836–841, Stroudsburg, PA, USA. Association for Computational Linguistics. [Xu et al.2001] Jinxi Xu, Ralph Weischedel, and Chanh Nguyen. 2001. Evaluating a probabilistic model 185 for cross-lingual information retrieval. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’01, pages 105–110, New York, NY, USA. ACM. [Xu et al.2005] Jia Xu, Richard Zens, and Hermann Ney. 2005. Sentence segmentation using ibm word alignment model 1. In Proceedings of EAMT 2005 (10th Annual Conference of the European Association for Machine Translation, pages 280–287. [Zbib et al.2012] Rabih Zbib, Erika Malchiodi, Jacob Devlin, David Stallard, Spyros Matsoukas, Richard Schwarz, John Makhoul, Omar F. Zaidan, and Chris Callison-Burch. 2012. Machine translation of Arabic dialects. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 186
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 187–195, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Improved Bayesian Logistic Supervised Topic Models with Data Augmentation Jun Zhu, Xun Zheng, Bo Zhang Department of Computer Science and Technology TNLIST Lab and State Key Lab of Intelligent Technology and Systems Tsinghua University, Beijing, China {dcszj,dcszb}@tsinghua.edu.cn; [email protected] Abstract Supervised topic models with a logistic likelihood have two issues that potentially limit their practical use: 1) response variables are usually over-weighted by document word counts; and 2) existing variational inference methods make strict mean-field assumptions. We address these issues by: 1) introducing a regularization constant to better balance the two parts based on an optimization formulation of Bayesian inference; and 2) developing a simple Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and collapsing out Dirichlet variables. Our augment-and-collapse sampling algorithm has analytical forms of each conditional distribution without making any restricting assumptions and can be easily parallelized. Empirical results demonstrate significant improvements on prediction performance and time efficiency. 1 Introduction As widely adopted in supervised latent Dirichlet allocation (sLDA) models (Blei and McAuliffe, 2010; Wang et al., 2009), one way to improve the predictive power of LDA is to define a likelihood model for the widely available documentlevel response variables, in addition to the likelihood model for document words. For example, the logistic likelihood model is commonly used for binary or multinomial responses. By imposing some priors, posterior inference is done with the Bayes’ rule. Though powerful, one issue that could limit the use of existing logistic supervised LDA models is that they treat the document-level response variable as one additional word via a normalized likelihood model. Although some special treatment is carried out on defining the likelihood of the single response variable, it is normally of a much smaller scale than the likelihood of the usually tens or hundreds of words in each document. As noted by (Halpern et al., 2012) and observed in our experiments, this model imbalance could result in a weak influence of response variables on the topic representations and thus non-satisfactory prediction performance. Another difficulty arises when dealing with categorical response variables is that the commonly used normal priors are no longer conjugate to the logistic likelihood and thus lead to hard inference problems. Existing approaches rely on variational approximation techniques which normally make strict mean-field assumptions. To address the above issues, we present two improvements. First, we present a general framework of Bayesian logistic supervised topic models with a regularization parameter to better balance response variables and words. 
Technically, instead of doing standard Bayesian inference via Bayes’ rule, which requires a normalized likelihood model, we propose to do regularized Bayesian inference (Zhu et al., 2011; Zhu et al., 2013b) via solving an optimization problem, where the posterior regularization is defined as an expectation of a logistic loss, a surrogate loss of the expected misclassification error; and a regularization parameter is introduced to balance the surrogate classification loss (i.e., the response log-likelihood) and the word likelihood. The general formulation subsumes standard sLDA as a special case. Second, to solve the intractable posterior inference problem of the generalized Bayesian logistic supervised topic models, we present a simple Gibbs sampling algorithm by exploring the ideas of data augmentation (Tanner and Wong, 1987; van Dyk and Meng, 2001; Holmes and Held, 2006). More specifically, we extend Polson’s method for Bayesian logistic regression (Polson et al., 2012) to the generalized logistic supervised topic models, which are much more challeng187 ing due to the presence of non-trivial latent variables. Technically, we introduce a set of PolyaGamma variables, one per document, to reformulate the generalized logistic pseudo-likelihood model (with the regularization parameter) as a scale mixture, where the mixture component is conditionally normal for classifier parameters. Then, we develop a simple and efficient Gibbs sampling algorithms with analytic conditional distributions without Metropolis-Hastings accept/reject steps. For Bayesian LDA models, we can also explore the conjugacy of the Dirichlet-Multinomial priorlikelihood pairs to collapse out the Dirichlet variables (i.e., topics and mixing proportions) to do collapsed Gibbs sampling, which can have better mixing rates (Griffiths and Steyvers, 2004). Finally, our empirical results on real data sets demonstrate significant improvements on time efficiency. The classification performance is also significantly improved by using appropriate regularization parameters. We also provide a parallel implementation with GraphLab (Gonzalez et al., 2012), which shows great promise in our preliminary studies. The paper is structured as follows. Sec. 2 introduces logistic supervised topic models as a general optimization problem. Sec. 3 presents Gibbs sampling algorithms with data augmentation. Sec. 4 presents experiments. Sec. 5 concludes. 2 Logistic Supervised Topic Models We now present the generalized Bayesian logistic supervised topic models. 2.1 The Generalized Models We consider binary classification with a training set D = {(wd, yd)}D d=1, where the response variable Y takes values from the output space Y = {0, 1}. A logistic supervised topic model consists of two parts — an LDA model (Blei et al., 2003) for describing the words W = {wd}D d=1, where wd = {wdn}Nd n=1 denote the words within document d, and a logistic classifier for considering the supervising signal y = {yd}D d=1. Below, we introduce each of them in turn. LDA: LDA is a hierarchical Bayesian model that posits each document as an admixture of K topics, where each topic Φk is a multinomial distribution over a V -word vocabulary. For document d, the generating process is 1. draw a topic proportion θd ∼Dir(α) 2. for each word n = 1, 2, . . . , Nd: (a) draw a topic1 zdn ∼Mult(θd) (b) draw the word wdn ∼Mult(Φzdn) where Dir(·) is a Dirichlet distribution; Mult(·) is a multinomial distribution; and Φzdn denotes the topic selected by the non-zero entry of zdn. 
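To make the generative process above concrete, the following sketch simulates it with NumPy. The corpus sizes and document lengths are arbitrary toy values, and drawing each topic Φk from Dir(β) anticipates the fully-Bayesian treatment described next; none of the variable names come from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

K, V, D = 5, 1000, 10          # number of topics, vocabulary size, documents (toy values)
alpha, beta = 0.1, 0.01        # symmetric Dirichlet hyperparameters

# Topics: each Phi_k is a multinomial distribution over the V-word vocabulary.
Phi = rng.dirichlet(np.full(V, beta), size=K)

docs = []
for d in range(D):
    N_d = rng.integers(50, 200)                   # document length (arbitrary)
    theta_d = rng.dirichlet(np.full(K, alpha))    # 1. topic proportion theta_d ~ Dir(alpha)
    z_d = rng.choice(K, size=N_d, p=theta_d)      # 2(a). topic z_dn ~ Mult(theta_d)
    w_d = np.array([rng.choice(V, p=Phi[z]) for z in z_d])  # 2(b). word w_dn ~ Mult(Phi_{z_dn})
    docs.append((z_d, w_d))
```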
For fully-Bayesian LDA, the topics are random samples from a Dirichlet prior, Φk ∼Dir(β). Let zd = {zdn}Nd n=1 denote the set of topic assignments for document d. Let Z = {zd}D d=1 and Θ = {θd}D d=1 denote all the topic assignments and mixing proportions for the entire corpus. LDA infers the posterior distribution p(Θ, Z, Φ|W) ∝ p0(Θ, Z, Φ)p(W|Z, Φ), where p0(Θ, Z, Φ) = ( ∏ d p(θd|α) ∏ n p(zdn|θd) ) ∏ k p(Φk|β) is the joint distribution defined by the model. As noticed in (Jiang et al., 2012), the posterior distribution by Bayes’ rule is equivalent to the solution of an information theoretical optimization problem min q(Θ,Z,Φ)KL(q(Θ, Z, Φ)∥p0(Θ, Z, Φ))−Eq[log p(W|Z, Φ)] s.t. : q(Θ, Z, Φ) ∈P, (1) where KL(q||p) is the Kullback-Leibler divergence from q to p and P is the space of probability distributions. Logistic classifier: To consider binary supervising information, a logistic supervised topic model (e.g., sLDA) builds a logistic classifier using the topic representations as input features p(y = 1|η, z) = exp(η⊤¯z) 1 + exp(η⊤¯z), (2) where ¯z is a K-vector with ¯zk = 1 N ∑N n=1 I(zk n = 1), and I(·) is an indicator function that equals to 1 if predicate holds otherwise 0. If the classifier weights η and topic assignments z are given, the prediction rule is ˆy|η,z = I(p(y = 1|η, z) > 0.5) = I(η⊤¯z > 0). (3) Since both η and Z are hidden variables, we propose to infer a posterior distribution q(η, Z) that has the minimal expected log-logistic loss R(q(η, Z)) = − ∑ d Eq[log p(yd|η, zd)], (4) which is a good surrogate loss for the expected misclassification loss, ∑ d Eq[I(ˆy|η,zd ̸= yd)], of a Gibbs classifier that randomly draws a model η from the posterior distribution and makes predictions (McAllester, 2003; Germain et al., 2009). In fact, this choice is motivated from the observation that logistic loss has been widely used as a convex surrogate loss for the misclassification 1A K-binary vector with only one entry equaling to 1. 188 loss (Rosasco et al., 2004) in the task of fully observed binary classification. Also, note that the logistic classifier and the LDA likelihood are coupled by sharing the latent topic assignments z. The strong coupling makes it possible to learn a posterior distribution that can describe the observed words well and make accurate predictions. Regularized Bayesian Inference: To integrate the above two components for hybrid learning, a logistic supervised topic model solves the joint Bayesian inference problem min q(η,Θ,Z,Φ) L(q(η, Θ, Z, Φ)) + cR(q(η, Z)) (5) s.t.: q(η, Θ, Z, Φ) ∈P, where L(q) = KL(q||p0(η, Θ, Z, Φ)) − Eq[log p(W|Z, Φ)] is the objective for doing standard Bayesian inference with the classifier weights η; p0(η, Θ, Z, Φ) = p0(η)p0(Θ, Z, Φ); and c is a regularization parameter balancing the influence from response variables and words. In general, we define the pseudo-likelihood for the supervision information ψ(yd|zd, η) = pc(yd|η, zd) = {exp(η⊤¯zd)}cyd (1 + exp(η⊤¯zd))c , (6) which is un-normalized if c ̸= 1. But, as we shall see this un-normalization does not affect our subsequent inference. Then, the generalized inference problem (5) of logistic supervised topic models can be written in the “standard” Bayesian inference form (1) min q(η,Θ,Z,Φ) L(q(η, Θ, Z, Φ)) −Eq[log ψ(y|Z, η)] (7) s.t.: q(η, Θ, Z, Φ) ∈P, where ψ(y|Z, η) = ∏ d ψ(yd|zd, η). 
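The quantities in Eqs. (2), (3) and (6) are straightforward to compute once the topic assignments of a document are fixed; a minimal sketch follows, with illustrative function names and the topic assignments given as a plain integer vector (these choices are ours, not the paper's).

```python
import numpy as np

def zbar(z, K):
    """Empirical topic proportions: zbar_k is the fraction of the N assignments equal to k."""
    return np.bincount(z, minlength=K) / len(z)

def log_pseudo_likelihood(y, eta, zb, c=1.0):
    """log psi(y | z, eta) = c*y*omega - c*log(1 + exp(omega)), with omega = eta . zbar (Eq. 6)."""
    omega = eta @ zb
    return c * y * omega - c * np.logaddexp(0.0, omega)

def predict(eta, zb):
    """Prediction rule of Eq. (3): y_hat = 1 iff eta . zbar > 0."""
    return int(eta @ zb > 0)
```

Setting c = 1 recovers the standard sLDA response likelihood; a larger c simply rescales the response log-likelihood term, which is why the missing normalization does not complicate the inference that follows.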
It is easy to show that the optimum solution of problem (5) or the equivalent problem (7) is the posterior distribution with supervising information, i.e., q(η, Θ, Z, Φ) = p0(η, Θ, Z, Φ)p(W|Z, Φ)ψ(y|η, Z) ϕ(y, W) . where ϕ(y, W) is the normalization constant to make q a distribution. We can see that when c = 1, the model reduces to the standard sLDA, which in practice has the imbalance issue that the response variable (can be viewed as one additional word) is usually dominated by the words. This imbalance was noticed in (Halpern et al., 2012). We will see that c can make a big difference later. Comparison with MedLDA: The above formulation of logistic supervised topic models as an instance of regularized Bayesian inference provides a direct comparison with the max-margin supervised topic model (MedLDA) (Jiang et al., 2012), which has the same form of the optimization problems. The difference lies in the posterior regularization, for which MedLDA uses a hinge loss of an expected classifier while the logistic supervised topic model uses an expected log-logistic loss. Gibbs MedLDA (Zhu et al., 2013a) is another max-margin model that adopts the expected hinge loss as posterior regularization. As we shall see in the experiments, by using appropriate regularization constants, logistic supervised topic models achieve comparable performance as maxmargin methods. We note that the relationship between a logistic loss and a hinge loss has been discussed extensively in various settings (Rosasco et al., 2004; Globerson et al., 2007). But the presence of latent variables poses additional challenges in carrying out a formal theoretical analysis of these surrogate losses (Lin, 2001) in the topic model setting. 2.2 Variational Approximation Algorithms The commonly used normal prior for η is nonconjugate to the logistic likelihood, which makes the posterior inference hard. Moreover, the latent variables Z make the inference problem harder than that of Bayesian logistic regression models (Chen et al., 1999; Meyer and Laud, 2002; Polson et al., 2012). Previous algorithms to solve problem (5) rely on variational approximation techniques. It is easy to show that the variational method (Wang et al., 2009) is a coordinate descent algorithm to solve problem (5) with the additional fully-factorized constraint q(η, Θ, Z, Φ) = q(η)(∏ d q(θd) ∏ n q(zdn)) ∏ k q(Φk) and a variational approximation to the expectation of the log-logistic likelihood, which is intractable to compute directly. Note that the non-Bayesian treatment of η as unknown parameters in (Wang et al., 2009) results in an EM algorithm, which still needs to make strict mean-field assumptions together with a variational bound of the expectation of the log-logistic likelihood. In this paper, we consider the full Bayesian treatment, which can principally consider prior distributions and infer the posterior covariance. 3 A Gibbs Sampling Algorithm Now, we present a simple and efficient Gibbs sampling algorithm for the generalized Bayesian logistic supervised topic models. 189 3.1 Formulation with Data Augmentation Since the logistic pseudo-likelihood ψ(y|Z, η) is not conjugate with normal priors, it is not easy to derive the sampling algorithms directly. Instead, we develop our algorithms by introducing auxiliary variables, which lead to a scale mixture of Gaussian components and analytic conditional distributions for automatical Bayesian inference without an accept/reject ratio. 
Our algorithm represents a first attempt to extend Polson’s approach (Polson et al., 2012) to deal with highly non-trivial Bayesian latent variable models. Let us first introduce the Polya-Gamma variables. Definition 1 (Polson et al., 2012) A random variable X has a Polya-Gamma distribution, denoted by X ∼PG(a, b), if X = 1 2π2 ∞ ∑ i=1 gk (i −1)2/2 + b2/(4π2), where a, b > 0 and each gi ∼G(a, 1) is an independent Gamma random variable. Let ωd = η⊤¯zd. Then, using the ideas of data augmentation (Tanner and Wong, 1987; Polson et al., 2012), we can show that the generalized pseudo-likelihood can be expressed as ψ(yd|zd, η) = 1 2c eκdωd ∫∞ 0 exp ( −λdω2 d 2 ) p(λd|c, 0)dλd, where κd = c(yd−1/2) and λd is a Polya-Gamma variable with parameters a = c and b = 0. This result indicates that the posterior distribution of the generalized Bayesian logistic supervised topic models, i.e., q(η, Θ, Z, Φ), can be expressed as the marginal of a higher dimensional distribution that includes the augmented variables λ. The complete posterior distribution is q(η, λ, Θ, Z, Φ) = p0(η, Θ, Z, Φ)p(W|Z, Φ)ϕ(y, λ|Z, η) ψ(y, W) , where the pseudo-joint distribution of y and λ is ϕ(y, λ|Z, η) = ∏ d exp ( κdωd −λdω2 d 2 ) p(λd|c, 0). 3.2 Inference with Collapsed Gibbs Sampling Although we can do Gibbs sampling to infer the complete posterior distribution q(η, λ, Θ, Z, Φ) and thus q(η, Θ, Z, Φ) by ignoring λ, the mixing rate would be slow due to the large sample space. One way to effectively improve mixing rates is to integrate out the intermediate variables (Θ, Φ) and build a Markov chain whose equilibrium distribution is the marginal distribution q(η, λ, Z). We propose to use collapsed Gibbs sampling, which has been successfully used in LDA (Griffiths and Steyvers, 2004). For our model, the collapsed posterior distribution is q(η, λ, Z) ∝p0(η)p(W, Z|α, β)ϕ(y, λ|Z, η) = p0(η) K ∏ k=1 δ(Ck + β) δ(β) D ∏ d=1 [δ(Cd + α) δ(α) × exp ( κdωd −λdω2 d 2 ) p(λd|c, 0) ] , where δ(x) = ∏dim(x) i=1 Γ(xi) Γ(∑dim(x) i=1 xi), Ct k is the number of times the term t being assigned to topic k over the whole corpus and Ck = {Ct k}V t=1; Ck d is the number of times that terms being associated with topic k within the d-th document and Cd = {Ck d}K k=1. Then, the conditional distributions used in collapsed Gibbs sampling are as follows. For η: for the commonly used isotropic Gaussian prior p0(η) = ∏ k N(ηk; 0, ν2), we have q(η|Z, λ) ∝p0(η) ∏ d exp ( κdωd −λdω2 d 2 ) = N(η; µ, Σ), (8) where the posterior mean is µ = Σ(∑ d κd¯zd) and the covariance is Σ = ( 1 ν2 I +∑ d λd¯zd¯z⊤ d )−1. We can easily draw a sample from a K-dimensional multivariate Gaussian distribution. The inverse can be robustly done using Cholesky decomposition, an O(K3) procedure. Since K is normally not large, the inversion can be done efficiently. For Z: The conditional distribution of Z is q(Z|η, λ) ∝ K ∏ k=1 δ(Ck + β) δ(β) D ∏ d=1 [δ(Cd + α) δ(α) × exp ( κdωd −λdω2 d 2 )] . By canceling common factors, we can derive the local conditional of one variable zdn as: q(zk dn = 1 | Z¬, η, λ, wdn = t) ∝(Ct k,¬n + βt)(Ck d,¬n + αk) ∑ t Ct k,¬n + ∑V t=1 βt exp ( γκdηk −λd γ2η2 k + 2γ(1 −γ)ηkΛk dn 2 ) , (9) where C· ·,¬n indicates that term n is excluded from the corresponding document or topic; γ = 1 Nd ; and Λk dn = 1 Nd−1 ∑ k′ ηk′Ck′ d,¬n is the discriminant function value without word n. We can see that the first term is from the LDA model for observed word counts and the second term is from the supervising signal y. 
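As an illustration of the η step in Eq. (8), the sketch below draws η from its Gaussian conditional given the augmented variables; the variable names are ours, and the λ draws of Eq. (10) and the topic updates of Eq. (9) are assumed to be produced elsewhere in the sampler.

```python
import numpy as np

def sample_eta(Zbar, y, lam, c=1.0, nu2=1.0, rng=None):
    """Draw eta ~ N(mu, Sigma) as in Eq. (8).

    Zbar : (D, K) matrix whose rows are the per-document empirical topic proportions.
    y    : (D,) binary labels in {0, 1}.
    lam  : (D,) Polya-Gamma auxiliary variables.
    """
    if rng is None:
        rng = np.random.default_rng()
    D, K = Zbar.shape
    kappa = c * (y - 0.5)                                         # kappa_d = c (y_d - 1/2)
    precision = np.eye(K) / nu2 + (Zbar * lam[:, None]).T @ Zbar  # Sigma^{-1}
    L = np.linalg.cholesky(precision)                             # the O(K^3) step noted in the text
    mu = np.linalg.solve(precision, Zbar.T @ kappa)               # mu = Sigma * sum_d kappa_d zbar_d
    eps = rng.standard_normal(K)
    return mu + np.linalg.solve(L.T, eps)                         # L^{-T} eps has covariance Sigma
```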
For λ: Finally, the conditional distribution of the augmented variables λ is q(λd|Z, η) ∝exp ( −λdω2 d 2 ) p(λd|c, 0) = PG ( λd; c, ωd ) , (10) 190 Algorithm 1 for collapsed Gibbs sampling 1: Initialization: set λ = 1 and randomly draw zdn from a uniform distribution. 2: for m = 1 to M do 3: draw a classifier from the distribution (8) 4: for d = 1 to D do 5: for each word n in document d do 6: draw the topic using distribution (9) 7: end for 8: draw λd from distribution (10). 9: end for 10: end for which is a Polya-Gamma distribution. The equality has been achieved by using the construction definition of the general PG(a, b) class through an exponential tilting of the PG(a, 0) density (Polson et al., 2012). To draw samples from the Polya-Gamma distribution, we adopt the efficient method2 proposed in (Polson et al., 2012), which draws the samples through drawing samples from the closely related exponentially tilted Jacobi distribution. With the above conditional distributions, we can construct a Markov chain which iteratively draws samples of η using Eq. (8), Z using Eq. (9) and λ using Eq. (10), with an initial condition. In our experiments, we initially set λ = 1 and randomly draw Z from a uniform distribution. In training, we run the Markov chain for M iterations (i.e., the burn-in stage), as outlined in Algorithm 1. Then, we draw a sample ˆη as the final classifier to make predictions on testing data. As we shall see, the Markov chain converges to stable prediction performance with a few burn-in iterations. 3.3 Prediction To apply the classifier ˆη on testing data, we need to infer their topic assignments. We take the approach in (Zhu et al., 2012; Jiang et al., 2012), which uses a point estimate of topics Φ from training data and makes prediction based on them. Specifically, we use the MAP estimate ˆΦ to replace the probability distribution p(Φ). For the Gibbs sampler, an estimate of ˆΦ using the samples is ˆϕkt ∝Ct k + βt. Then, given a testing document w, we infer its latent components z using ˆΦ as p(zn = k|z¬n) ∝ˆϕkwn(Ck ¬n + αk), where 2The basic sampler was implemented in the R package BayesLogit. We implemented the sampling algorithm in C++ together with our topic model sampler. Ck ¬n is the times that the terms in this document w assigned to topic k with the n-th term excluded. 4 Experiments We present empirical results and sensitivity analysis to demonstrate the efficiency and prediction performance3 of the generalized logistic supervised topic models on the 20Newsgroups (20NG) data set, which contains about 20,000 postings within 20 news groups. We follow the same setting as in (Zhu et al., 2012) and remove a standard list of stop words for both binary and multiclass classification. For all the experiments, we use the standard normal prior p0(η) (i.e., ν2 = 1) and the symmetric Dirichlet priors α = α K 1, β = 0.01×1, where 1 is a vector with all entries being 1. For each setting, we report the average performance and the standard deviation with five randomly initialized runs. 4.1 Binary classification Following the same setting in (Lacoste-Jullien et al., 2009; Zhu et al., 2012), the task is to distinguish postings of the newsgroup alt.atheism and those of the group talk.religion.misc. The training set contains 856 documents and the test set contains 569 documents. 
We compare the generalized logistic supervised LDA using Gibbs sampling (denoted by gSLDA) with various competitors, including the standard sLDA using variational mean-field methods (denoted by vSLDA) (Wang et al., 2009), the MedLDA model using variational mean-field methods (denoted by vMedLDA) (Zhu et al., 2012), and the MedLDA model using collapsed Gibbs sampling algorithms (denoted by gMedLDA) (Jiang et al., 2012). We also include the unsupervised LDA using collapsed Gibbs sampling as a baseline, denoted by gLDA. For gLDA, we learn a binary linear SVM on its topic representations using SVMLight (Joachims, 1999). The results of DiscLDA (Lacoste-Jullien et al., 2009) and linear SVM on raw bag-of-words features were reported in (Zhu et al., 2012). For gSLDA, we compare two versions – the standard sLDA with c = 1 and the sLDA with a well-tuned c value. To distinguish, we denote the latter by gSLDA+. We set c = 25 for gSLDA+, and set α = 1 and M = 100 for both gSLDA and gSLDA+. As we shall see, gSLDA is insensitive to α, 3Due to space limit, the topic visualization (similar to that of MedLDA) is deferred to a longer version. 191 5 10 15 20 25 30 0.55 0.6 0.65 0.7 0.75 0.8 0.85 # Topics Accuracy gSLDA gSLDA+ vSLDA vMedLDA gMedLDA gLDA+SVM (a) accuracy 5 10 15 20 25 30 10 −2 10 −1 10 0 10 1 10 2 10 3 # Topics Train−time (seconds) gSLDA gSLDA+ vSLDA vMedLDA gMedLDA gLDA+SVM (b) training time 5 10 15 20 25 30 0 0.5 1 1.5 2 2.5 3 3.5 4 # Topics Test−time (seconds) gSLDA gSLDA+ vSLDA gMedLDA vMedLDA gLDA+SVM (c) testing time Figure 1: Accuracy, training time (in log-scale) and testing time on the 20NG binary data set. c and M in a wide range. Fig. 1 shows the performance of different methods with various numbers of topics. For accuracy, we can draw two conclusions: 1) without making restricting assumptions on the posterior distributions, gSLDA achieves higher accuracy than vSLDA that uses strict variational mean-field approximation; and 2) by using the regularization constant c to improve the influence of supervision information, gSLDA+ achieves much better classification results, in fact comparable with those of MedLDA models since they have the similar mechanism to improve the influence of supervision by tuning a regularization constant. The fact that gLDA+SVM performs better than the standard gSLDA is due to the same reason, since the SVM part of gLDA+SVM can well capture the supervision information to learn a classifier for good prediction, while standard sLDA can’t well-balance the influence of supervision. In contrast, the well-balanced gSLDA+ model successfully outperforms the twostage approach, gLDA+SVM, by performing topic discovery and prediction jointly4. For training time, both gSLDA and gSLDA+ are very efficient, e.g., about 2 orders of magnitudes faster than vSLDA and about 1 order of magnitude faster than vMedLDA. For testing time, gSLDA and gSLDA+ are comparable with gMedLDA and the unsupervised gLDA, but faster than the variational vMedLDA and vSLDA, especially when K is large. 4.2 Multi-class classification We perform multi-class classification on the 20NG data set with all the 20 categories. For multiclass classification, one possible extension is to use a multinomial logistic regression model for categorical variables Y by using topic representations ¯z as input features. However, it is non4The variational sLDA with a well-tuned c is significantly better than the standard sLDA, but a bit inferior to gSLDA+. 
trivial to develop a Gibbs sampling algorithm using the similar data augmentation idea, due to the presence of latent variables and the nonlinearity of the soft-max function. In fact, this is harder than the multinomial Bayesian logistic regression, which can be done via a coordinate strategy (Polson et al., 2012). Here, we apply the binary gSLDA to do the multi-class classification, following the “one-vs-all” strategy, which has been shown effective (Rifkin and Klautau, 2004), to provide some preliminary analysis. Namely, we learn 20 binary gSLDA models and aggregate their predictions by taking the most likely ones as the final predictions. We again evaluate two versions of gSLDA – the standard gSLDA with c = 1 and the improved gSLDA+ with a well-tuned c value. Since gSLDA is also insensitive to α and c for the multi-class task, we set α = 5.6 for both gSLDA and gSLDA+, and set c = 256 for gSLDA+. The number of burn-in is set as M = 40, which is sufficiently large to get stable results, as we shall see. Fig. 2 shows the accuracy and training time. We can see that: 1) by using Gibbs sampling without restricting assumptions, gSLDA performs better than the variational vSLDA that uses strict meanfield approximation; 2) due to the imbalance between the single supervision and a large set of word counts, gSLDA doesn’t outperform the decoupled approach, gLDA+SVM; and 3) if we increase the value of the regularization constant c, supervision information can be better captured to infer predictive topic representations, and gSLDA+ performs much better than gSLDA. In fact, gSLDA+ is even better than the MedLDA that uses mean-field approximation, while is comparable with the MedLDA using collapsed Gibbs sampling. Finally, we should note that the improvement on the accuracy might be due to the different strategies on building the multi-class classifiers. But given the performance gain in the binary task, we believe that the Gibbs sampling algorith192 20 30 40 50 60 70 80 90 100 110 0.55 0.6 0.65 0.7 0.75 0.8 # Topics Accuracy gSLDA gSLDA+ vSLDA vMedLDA gMedLDA gLDA+SVM (a) accuracy 20 30 40 50 60 70 80 90 100 110 10 −1 10 0 10 1 10 2 10 3 10 4 10 5 # Topics Train−time (seconds) gSLDA gSLDA+ vSLDA vMedLDA gMedLDA gLDA+SVM parallel−gSLDA parallel−gSLDA+ (b) training time Figure 2: Multi-class classification. Table 1: Split of training time over various steps. SAMPLE λ SAMPLE η SAMPLE Z K=20 2841.67 (65.80%) 7.70 (0.18%) 1455.25 (34.02%) K=30 2417.95 (56.10%) 10.34 (0.24%) 1888.78 (43.66%) K=40 2393.77 (49.00%) 14.66 (0.30%) 2476.82 (50.70%) K=50 2161.09 (43.67%) 16.33 (0.33%) 2771.26 (56.00%) m without factorization assumptions is the main factor for the improved performance. For training time, gSLDA models are about 10 times faster than variational vSLDA. Table 1 shows in detail the percentages of the training time (see the numbers in brackets) spent at each sampling step for gSLDA+. We can see that: 1) sampling the global variables η is very efficient, while sampling local variables (λ, Z) are much more expensive; and 2) sampling λ is relatively stable as K increases, while sampling Z takes more time as K becomes larger. But, the good news is that our Gibbs sampling algorithm can be easily parallelized to speedup the sampling of local variables, following the similar architectures as in LDA. A Parallel Implementation: GraphLab is a graph-based programming framework for parallel computing (Gonzalez et al., 2012). 
It provides a high-level abstraction of parallel tasks by expressing data dependencies with a distributed graph. GraphLab implements a GAS (gather, apply, scatter) model, where the data required to compute a vertex (edge) are gathered along its neighboring components, and modification of a vertex (edge) will trigger its adjacent components to recompute their values. Since GAS has been successfully applied to several machine learning algorithms5 including Gibbs sampling of LDA, we choose it as a preliminary attempt to parallelize our Gibbs sampling algorithm. A systematical investigation of the parallel computation with various architectures in interesting, but beyond the scope of this paper. For our task, since there is no coupling among the 20 binary gSLDA classifiers, we can learn them in parallel. This suggests an efficient hybrid multi-core/multi-machine implementation, which 5http://docs.graphlab.org/toolkits.html can avoid the time consumption of IPC (i.e., interprocess communication). Namely, we run our experiments on a cluster with 20 nodes where each node is equipped with two 6-core CPUs (2.93GHz). Each node is responsible for learning one binary gSLDA classifier with a parallel implementation on its 12-cores. For each binary gSLDA model, we construct a bipartite graph connecting train documents with corresponding terms. The graph works as follows: 1) the edges contain the token counts and topic assignments; 2) the vertices contain individual topic counts and the augmented variables λ; 3) the global topic counts and η are aggregated from the vertices periodically, and the topic assignments and λ are sampled asynchronously during the GAS phases. Once started, sampling and signaling will propagate over the graph. One thing to note is that since we cannot directly measure the number of iterations of an asynchronous model, here we estimate it with the total number of topic samplings, which is again aggregated periodically, divided by the number of tokens. We denote the parallel models by parallelgSLDA (c = 1) and parallel-gSLDA+ (c = 256). From Fig. 2 (b), we can see that the parallel gSLDA models are about 2 orders of magnitudes faster than their sequential counterpart models, which is very promising. Also, the prediction performance is not sacrificed as we shall see in Fig. 4. 4.3 Sensitivity analysis Burn-In: Fig. 3 shows the performance of gSLDA+ with different burn-in steps for binary classification. When M = 0 (see the most left points), the models are built on random topic assignments. We can see that the classification performance increases fast and converges to the stable optimum with about 20 burn-in steps. The training time increases about linearly in general when using more burn-in steps. Moreover, the training time increases linearly as K increases. In the previous experiments, we set M = 100. Fig. 4 shows the performance of gSLDA+ and its parallel implementation (i.e., parallelgSLDA+) for the multi-class classification with different burn-in steps. We can see when the number of burn-in steps is larger than 20, the performance of gSLDA+ is quite stable. Again, in the log-log scale, since the slopes of the lines in Fig. 
4 (b) are close to the constant 1, the training time grows about linearly as the number of 193 10 0 10 1 10 2 10 3 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 1.05 burn−in iterations Accuracy K = 5 K = 10 K=20 train accuracy test accuracy (a) accuracy 0 100 200 300 400 500 0 5 10 15 20 25 30 35 burn−in iterations Train−time (seconds) K = 5 K = 10 K=20 (b) training time Figure 3: Performance of gSLDA+ with different burn-in steps for binary classification. The most left points are for the settings with no burn in. 10 −1 10 0 10 1 10 2 10 3 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 burn−in iterations Accuracy K = 20 K = 30 K = 40 K = 50 gSLDA+ parallel−gSLDA+ (a) accuracy 10 −1 10 0 10 1 10 2 10 3 10 0 10 1 10 2 10 3 10 4 10 5 burn−in iterations Train−time (sec) K = 20 K = 30 K = 40 K = 50 parallel−gSLDA+ gSLDA+ (b) training time Figure 4: Performance of gSLDA+ and parallelgSLDA+ with different burn-in steps for multiclass classification. The most left points are for the settings with no burn in. burn-in steps increases. Even when we use 40 or 60 burn-in steps, the training time is still competitive, compared with the variational vSLDA. For parallel-gSLDA+ using GraphLab, the training is consistently about 2 orders of magnitudes faster. Meanwhile, the classification performance is also comparable with that of gSLDA+, when the number of burn-in steps is larger than 40. In the previous experiments, we have set M = 40 for both gSLDA+ and parallel-gSLDA+. Regularization constant c: Fig. 5 shows the performance of gSLDA in the binary classification task with different c values. We can see that in a wide range, e.g., from 9 to 100, the performance is quite stable for all the three K values. But for the standard sLDA model, i.e., c = 1, both the training accuracy and test accuracy are low, which indicates that sLDA doesn’t fit the supervision data well. When c becomes larger, the training accuracy gets higher, but it doesn’t seem to over-fit and the generalization performance is stable. In the above experiments, we set c = 25. For multiclass classification, we have similar observations and set c = 256 in the previous experiments. Dirichlet prior α: Fig. 6 shows the performance of gSLDA on the binary task with different α values. We report two cases with c = 1 and c = 9. We can see that the performance is quite stable in a wide range of α values, e.g., from 0.1 1 2 3 4 6 7 8 9 10 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 1.05 √c Accuracy K = 5 K = 10 K = 20 train accuracy test accuracy (a) accuracy 1 2 3 4 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 11 √c Train−time (seconds) K = 5 K = 10 K = 20 (b) training time Figure 5: Performance of gSLDA for binary classification with different c values. 10 −4 10 −2 10 0 10 2 10 4 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 α Accuracy K = 5 K = 10 K = 15 K=20 (a) c = 1 10 −6 10 −4 10 −2 10 0 10 2 10 4 0.55 0.6 0.65 0.7 0.75 0.8 0.85 α Accuracy K = 5 K = 10 K = 15 K=20 (b) c = 9 Figure 6: Accuracy of gSLDA for binary classification with different α values in two settings with c = 1 and c = 9. to 10. We also noted that the change of α does not affect the training time much. 5 Conclusions and Discussions We present two improvements to Bayesian logistic supervised topic models, namely, a general formulation by introducing a regularization parameter to avoid model imbalance and a highly efficient Gibbs sampling algorithm without restricting assumptions on the posterior distributions by exploring the idea of data augmentation. The algorithm can also be parallelized. 
Empirical results for both binary and multi-class classification demonstrate significant improvements over the existing logistic supervised topic models. Our preliminary results with GraphLab have shown promise on parallelizing the Gibbs sampling algorithm. For future work, we plan to carry out more careful investigations, e.g., using various distributed architectures (Ahmed et al., 2012; Newman et al., 2009; Smola and Narayanamurthy, 2010), to make the sampling algorithm highly scalable to deal with massive data corpora. Moreover, the data augmentation technique can be applied to deal with other types of response variables, such as count data with a negative-binomial likelihood (Polson et al., 2012). Acknowledgments This work is supported by National Key Foundation R&D Projects (No.s 2013CB329403, 194 2012CB316301), Tsinghua Initiative Scientific Research Program No.20121088071, Tsinghua National Laboratory for Information Science and Technology, and the 221 Basic Research Plan for Young Faculties at Tsinghua University. References A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. Smola. 2012. Scalable inference in latent variable models. In International Conference on Web Search and Data Mining (WSDM). D.M. Blei and J.D. McAuliffe. 2010. Supervised topic models. arXiv:1003.0783v1. D.M. Blei, A.Y. Ng, and M.I. Jordan. 2003. Latent Dirichlet allocation. JMLR, 3:993–1022. M. Chen, J. Ibrahim, and C. Yiannoutsos. 1999. Prior elicitation, variable selection and Bayesian computation for logistic regression models. Journal of Royal Statistical Society, Ser. B, (61):223–242. P. Germain, A. Lacasse, F. Laviolette, and M. Marchand. 2009. PAC-Bayesian learning of linear classifiers. In International Conference on Machine Learning (ICML), pages 353–360. A. Globerson, T. Koo, X. Carreras, and M. Collins. 2007. Exponentiated gradient algorithms for loglinear structured prediction. In ICML, pages 305– 312. J.E. Gonzalez, Y. Low, H. Gu, D. Bickson, and C. Guestrin. 2012. Powergraph: Distributed graphparallel computation on natural graphs. In the 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI). T.L. Griffiths and M. Steyvers. 2004. Finding scientific topics. Proceedings of National Academy of Science (PNAS), pages 5228–5235. Y. Halpern, S. Horng, L. Nathanson, N. Shapiro, and D. Sontag. 2012. A comparison of dimensionality reduction techniques for unstructured clinical text. In ICML 2012 Workshop on Clinical Data Analysis. C. Holmes and L. Held. 2006. Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis, 1(1):145–168. Q. Jiang, J. Zhu, M. Sun, and E.P. Xing. 2012. Monte Carlo methods for maximum margin supervised topic models. In Advances in Neural Information Processing Systems (NIPS). T. Joachims. 1999. Making large-scale SVM learning practical. MIT press. S. Lacoste-Jullien, F. Sha, and M.I. Jordan. 2009. DiscLDA: Discriminative learning for dimensionality reduction and classification. Advances in Neural Information Processing Systems (NIPS), pages 897– 904. Y. Lin. 2001. A note on margin-based loss functions in classification. Technical Report No. 1044. University of Wisconsin. D. McAllester. 2003. PAC-Bayesian stochastic model selection. Machine Learning, 51:5–21. M. Meyer and P. Laud. 2002. Predictive variable selection in generalized linear models. Journal of American Statistical Association, 97(459):859–871. D. Newman, A. Asuncion, P. Smyth, and M. Welling. 2009. Distributed algorithms for topic models. 
Journal of Machine Learning Research (JMLR), (10):1801–1828. N.G. Polson, J.G. Scott, and J. Windle. 2012. Bayesian inference for logistic models using Polya-Gamma latent variables. arXiv:1205.0310v1. R. Rifkin and A. Klautau. 2004. In defense of onevs-all classification. Journal of Machine Learning Research (JMLR), (5):101–141. L. Rosasco, E. De Vito, A. Caponnetto, M. Piana, and A. Verri. 2004. Are loss functions all the same? Neural Computation, (16):1063–1076. A. Smola and S. Narayanamurthy. 2010. An architecture for parallel topic models. Very Large Data Base (VLDB), 3(1-2):703–710. M.A. Tanner and W.-H. Wong. 1987. The calculation of posterior distributions by data augmentation. Journal of the Americal Statistical Association (JASA), 82(398):528–540. D. van Dyk and X. Meng. 2001. The art of data augmentation. Journal of Computational and Graphical Statistics (JCGS), 10(1):1–50. C. Wang, D.M. Blei, and Li F.F. 2009. Simultaneous image classification and annotation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). J. Zhu, N. Chen, and E.P. Xing. 2011. Infinite latent SVM for classification and multi-task learning. In Advances in Neural Information Processing Systems (NIPS), pages 1620–1628. J. Zhu, A. Ahmed, and E.P. Xing. 2012. MedLDA: maximum margin supervised topic models. Journal of Machine Learning Research (JMLR), (13):2237– 2278. J. Zhu, N. Chen, H. Perkins, and B. Zhang. 2013a. Gibbs max-margin topic models with fast sampling algorithms. In International Conference on Machine Learning (ICML). J. Zhu, N. Chen, and E.P. Xing. 2013b. Bayesian inference with posterior regularization and applications to infinite latent svms. arXiv:1210.1766v2. 195
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 11–21, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Integrating Translation Memory into Phrase-Based Machine Translation during Decoding Kun Wang† Chengqing Zong† Keh-Yih Su‡ †National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China ‡Behavior Design Corporation, Taiwan †{kunwang, cqzong}@nlpr.ia.ac.cn, ‡[email protected] Abstract Since statistical machine translation (SMT) and translation memory (TM) complement each other in matched and unmatched regions, integrated models are proposed in this paper to incorporate TM information into phrase-based SMT. Unlike previous multi-stage pipeline approaches, which directly merge TM result into the final output, the proposed models refer to the corresponding TM information associated with each phrase at SMT decoding. On a Chinese–English TM database, our experiments show that the proposed integrated Model-III is significantly better than either the SMT or the TM systems when the fuzzy match score is above 0.4. Furthermore, integrated Model-III achieves overall 3.48 BLEU points improvement and 2.62 TER points reduction in comparison with the pure SMT system. Besides, the proposed models also outperform previous approaches significantly. 1 Introduction Statistical machine translation (SMT), especially the phrase-based model (Koehn et al., 2003), has developed very fast in the last decade. For certain language pairs and special applications, SMT output has reached an acceptable level, especially in the domains where abundant parallel corpora are available (He et al., 2010). However, SMT is rarely applied to professional translation because its output quality is still far from satisfactory. Especially, there is no guarantee that a SMT system can produce translations in a consistent manner (Ma et al., 2011). In contrast, translation memory (TM), which uses the most similar translation sentence (usually above a certain fuzzy match threshold) in the database as the reference for post-editing, has been widely adopted in professional translation field for many years (Lagoudaki, 2006). TM is very useful for repetitive material such as updated product manuals, and can give high quality and consistent translations when the similarity of fuzzy match is high. Therefore, professional translators trust TM much more than SMT. However, high-similarity fuzzy matches are available unless the material is very repetitive. In general, for those matched segments1, TM provides more reliable results than SMT does. One reason is that the results of TM have been revised by human according to the global context, but SMT only utilizes local context. However, for those unmatched segments, SMT is more reliable. Since TM and SMT complement each other in those matched and unmatched segments, the output quality is expected to be raised significantly if they can be combined to supplement each other. In recent years, some previous works have incorporated TM matched segments into SMT in a pipelined manner (Koehn and Senellart, 2010; Zhechev and van Genabith, 2010; He et al., 2011; Ma et al., 2011). All these pipeline approaches translate the sentence in two stages. They first determine whether the extracted TM sentence pair should be adopted or not. Most of them use fuzzy match score as the threshold, but He et al. (2011) and Ma et al. (2011) use a classifier to make the judgment. 
Afterwards, they merge the relevant translations of matched segments into the source sentence, and then force the SMT system to only translate those unmatched segments at decoding. There are three obvious drawbacks for the above pipeline approaches. Firstly, all of them determine whether those matched segments 1 We mean “sub-sentential segments” in this work. 11 should be adopted or not at sentence level. That is, they are either all adopted or all abandoned regardless of their individual quality. Secondly, as several TM target phrases might be available for one given TM source phrase due to insertions, the incorrect selection made in the merging stage cannot be remedied in the following translation stage. For example, there are six possible corresponding TM target phrases for the given TM source phrase “关联4 的5 对象6” (as shown in Figure 1) such as “object2 that3 is4 associated5”, and “an1 object2 that3 is4 associated5 with6”, etc. And it is hard to tell which one should be adopted in the merging stage. Thirdly, the pipeline approach does not utilize the SMT probabilistic information in deciding whether a matched TM phrase should be adopted or not, and which target phrase should be selected when we have multiple candidates. Therefore, the possible improvements resulted from those pipeline approaches are quite limited. On the other hand, instead of directly merging TM matched phrases into the source sentence, some approaches (Biçici and Dymetman, 2008; Simard and Isabelle, 2009) simply add the longest matched pairs into SMT phrase table, and then associate them with a fixed large probability value to favor the corresponding TM target phrase at SMT decoding. However, since only one aligned target phrase will be added for each matched source phrase, they share most drawbacks with the pipeline approaches mentioned above and merely achieve similar performance. To avoid the drawbacks of the pipeline approach (mainly due to making a hard decision before decoding), we propose several integrated models to completely make use of TM information during decoding. For each TM source phrase, we keep all its possible corresponding target phrases (instead of keeping only one of them). The integrated models then consider all corresponding TM target phrases and SMT preference during decoding. Therefore, the proposed integrated models combine SMT and TM at a deep level (versus the surface level at which TM result is directly plugged in under previous pipeline approaches). On a Chinese–English computer technical documents TM database, our experiments have shown that the proposed Model-III improves the translation quality significantly over either the pure phrase-based SMT or the TM systems when the fuzzy match score is above 0.4. Compared with the pure SMT system, the proposed integrated Model-III achieves 3.48 BLEU points improvement and 2.62 TER points reduction overall. Furthermore, the proposed models significantly outperform previous pipeline approaches. 
2 Problem Formulation Compared with the standard phrase-based machine translation model, the translation problem is reformulated as follows (only based on the best TM, however, it is similar for multiple TM sentences): (1) Where is the given source sentence to be translated, is the corresponding target sentence and is the final translation; are the associated information of the best TM sentence-pair; and denote the corresponding TM sentence pair; denotes its associated fuzzy match score (from 0.0 to 1.0); is the editing operations between and ; and denotes the word alignment between and . Let and denote the k-th associated source phrase and target phrase, respectively. Also, and denote the associated source phrase sequence and the target phrase sequence, respectively (total phrases without insertion). Then the above formula (1) can be decomposed as below: (2) Afterwards, for any given source phrase , we can find its corresponding TM source phrase and all possible TM target phrases (each of them is denoted by ) with the help of corresponding editing operations and word alignment . As mentioned above, we can have six different possible TM target phrases for the TM source phrase “关联4 的5 对象6”. This 获取0 与1 批注2 标签3 关联4 的5 对象6 。7 获取0 或1 设置2 与3 批注4 关联5 的6 对象7 。8 gets0 an1 object2 that3 is4 associated5 with6 the7 annotation8 label9 .10 Source TM Source TM Target Figure 1: Phrase Mapping Example 12 is because there are insertions around the directly aligned TM target phrase. In the above Equation (2), we first segment the given source sentence into various phrases, and then translate the sentence based on those source phrases. Also, is replaced by , as they are actually the same segmentation sequence. Assume that the segmentation probability is a uniform distribution, with the corresponding TM source and target phrases obtained above, this problem can be further simplified as follows: (3) Where is the corresponding TM phrase matching status for , which is a vector consisting of various indicators (e.g., Target Phrase Content Matching Status, etc., to be defined later), and reflects the quality of the given candidate; is the linking status vector of (the aligned source phrase of within ), and indicates the matching and linking status in the source side (which is closely related to the status in the target side); also, indicates the corresponding TM fuzzy match interval specified later. In the second line of Equation (3), we convert the fuzzy match score into its corresponding interval , and incorporate all possible combinations of TM target phrases. Afterwards, we select the best one in the third line. Last, in the fourth line, we introduce the source matching status and the target linking status (detailed features would be defined later). Since we might have several possible TM target phrases , the one with the maximum score will be adopted during decoding. The first factor in the above formula (3) is just the typical phrase-based SMT model, and the second factor (to be specified in the Section 3) is the information derived from the TM sentence pair. Therefore, we can still keep the original phrase-based SMT model and only pay attention to how to extract useful information from the best TM sentence pair to guide SMT decoding. 3 Proposed Models Three integrated models are proposed to incorporate different features as follows: 3.1 Model-I In this simplest model, we only consider Target Phrase Content Matching Status (TCM) for . 
For , we consider four different features at the same time: Source Phrase Content Matching Status (SCM), Number of Linking Neighbors (NLN), Source Phrase Length (SPL), and Sentence End Punctuation Indicator (SEP). Those features will be defined below. is then specified as: All features incorporated in this model are specified as follows: TM Fuzzy Match Interval (z): The fuzzy match score (FMS) between source sentence and TM source sentence indicates the reliability of the given TM sentence, and is defined as (Sikes, 2007): Where is the word-based Levenshtein Distance (Levenshtein, 1966) between and . We equally divide FMS into ten fuzzy match intervals such as: [0.9, 1.0), [0.8, 0.9) etc., and the index specifies the corresponding interval. For example, since the fuzzy match score between and in Figure 1 is 0.667, then . Target Phrase Content Matching Status (TCM): It indicates the content matching status between and , and reflects the quality of . Because is nearly perfect when FMS is high, if the similarity between and is high, it implies that the given is possibly a good candidate. It is a member of {Same, High, Low, NA (Not-Applicable)}, and is specified as: (1) If is not null: (a) if , ; (b) else if , ; (c) else, ; (2) If is null, ; Here is null means that either there is no corresponding TM source phrase or there is no corresponding TM target phrase 13 aligned with . In the example of Figure 1, assume that the given is “关联 5 的6 对象7” and is “object that is associated”. If is “object2 that3 is4 associated5”, ; if is “an1 object2 that3 is4 associated5”, . Source Phrase Content Matching Status (SCM): Which indicates the content matching status between and , and it affects the matching status of and greatly. The more similar is to , the more similar is to . It is a member of {Same, High, Low, NA} and is defined as: (1) If is not null: (a) if , ; (b) else if , ; (c) else, ; (2) If is null, ; Here is null means that there is no corresponding TM source phrase for the given source phrase . Take the source phrase “关联5 的6 对象7” in Figure 1 for an example, since its corresponding is “关联4 的5 对象6”, then . Number of Linking Neighbors (NLN): Usually, the context of a source phrase would affect its target translation. The more similar the context are, the more likely that the translations are the same. Therefore, this NLN feature reflects the number of matched neighbors (words) and it is a vector of <x, y>. Where “x” denotes the number of matched source neighbors; and “y” denotes how many those neighbors are also linked to target words (not null), which also affects the TM target phrase selection. This feature is a member of {<x, y>: <2, 2>, <2, 1>, <2, 0>, <1, 1>, <1, 0>, <0, 0>}. For the source phrase “关联5 的6 对象 7” in Figure 1, the corresponding TM source phrase is “关联4 的5 对象6” . As only their right neighbors “。8” and “。7” are matched, and “。7” is aligned with “.10”, NLN will be <1, 1>. Source Phrase Length (SPL): Usually the longer the source phrase is, the more reliable the TM target phrase is. For example, the corresponding for the source phrase with 5 words would be more reliable than that with only one word. This feature denotes the number of words included in , and is a member of {1, 2, 3, 4, ≥5}. For the case “关联5 的6 对象7”, SPL will be 3. Sentence End Punctuation Indicator (SEP): Which indicates whether the current phrase is a punctuation at the end of the sentence, and is a member of {Yes, No}. For example, the SEP for “关联5 的6 对象7” will be “No”. 
It is introduced because the SCM and TCM for a sentence-end-punctuation are always “Same” regardless of other features. Therefore, it is used to distinguish this special case from other cases. 3.2 Model-II As Model-I ignores the relationship among various possible TM target phrases, we add two features TM Candidate Set Status (CSS) and Longest TM Candidate Indicator (LTC) to incorporate this relationship among them. Since CSS is redundant after LTC is known, we thus ignore it for evaluating TCM probability in the following derivation: The two new features CSS and LTC adopted in Model-II are defined as follows: TM Candidate Set Status (CSS): Which restricts the possible status of , and is a member of {Single, Left-Ext, Right-Ext, Both-Ext, NA}. Where “Single” means that there is only one candidate for the given source phrase ; “Left-Ext” means that there are multiple candidates, and all the candidates are generated by extending only the left boundary; “Right-Ext” means that there are multiple candidates, and all the candidates are generated by only extending to the right; “Both-Ext” means that there are multiple candidates, and the candidates are generated by extending to both sides; “NA” means that is null. For “关联 4 的 5 对象 6” in Figure 1, the linked TM target phrase is “object2 that3 is4 associated5”, and there are 5 other candidates by extending to both sides. Therefore, . Longest TM Candidate Indicator (LTC): Which indicates whether the given is the longest candidate or not, and is a member of {Original, Left-Longest, Right-Longest, BothLongest, Medium, NA}. Where “Original” means that the given is the one without extension; “Left-Longest” means that the given 14 is only extended to the left and is the longest one; “Right-Longest” means that the given is only extended to the right and is the longest one; “Both-Longest” means that the given is extended to both sides and is the longest one; “Medium” means that the given has been extended but not the longest one; “NA” means that is null. For “object2 that3 is4 associated5” in Figure 1, ; for “an1 object2 that3 is4 associated5”, ; for the longest “an1 object2 that3 is4 associated5 with6 the7”, . 3.3 Model-III The abovementioned integrated models ignore the reordering information implied by TM. Therefore, we add a new feature Target Phrase Adjacent Candidate Relative Position Matching Status (CPM) into Model-II and Model-III is given as: We assume that CPM is independent with SPL and SEP, because the length of source phrase would not affect reordering too much and SEP is used to distinguish the sentence end punctuation with other phrases. The new feature CPM adopted in Model-III is defined as: Target Phrase Adjacent Candidate Relative Position Matching Status (CPM): Which indicates the matching status between the relative position of and the relative position of . It checks if are positioned in the same order with , and reflects the quality of ordering the given target candidate . It is a member of {Adjacent-Same, Adjacent-Substitute, Linked-Interleaved, Linked-Cross, LinkedReversed, Skip-Forward, Skip-Cross, SkipReversed, NA}. Recall that is always right adjacent to , then various cases are defined as follows: (1) If both and are not null: (a) If is on the right of and they are also adjacent to each other: i. If the right boundary words of and are the same, and the left boundary words of and are the same, ; ii. Otherwise, ; (b) If is on the right of but they are not adjacent to each other, ; (c) If is not on the right of : i. 
If there are cross parts between and , ; ii. Otherwise, ; (2) If is null but is not null, then find the first which is not null ( starts from 2)2: (a) If is on the right of , ; (b) If is not on the right of : i. If there are cross parts between and , ; ii. Otherwise, . (3) If is null, . In Figure 1, assume that , and are “gets an”, “object that is associated with” and “gets0 an1”, respectively. For “object2 that3 is4 associated5”, because is on the right of and they are adjacent pair, and both boundary words (“an” and “an1”; “object” and “object2”) are matched, ; for “an1 object2 that3 is4 associated5”, because there are cross parts “an1” between and , . On the other hand, assume that , and are “gets”, “object that is associated with” and “gets0”, respectively. For “an1 object2 that3 is4 associated5”, because and are adjacent pair, but the left boundary words of and (“object” and “an1”) are not matched, ; for “object2 that3 is4 associated5”, because is on the right of but they are not adjacent pair, therefore, . One more example, assume that , and are “the annotation label”, “object that is associated with” and “the7 annotation8 label9”, respectively. For “an1 object2 that3 is4 associated5”, because is on the left of , and there are no cross parts, . 2 It can be identified by simply memorizing the index of nearest non-null during search. 15 4 Experiments 4.1 Experimental Setup Our TM database consists of computer domain Chinese-English translation sentence-pairs, which contains about 267k sentence-pairs. The average length of Chinese sentences is 13.85 words and that of English sentences is 13.86 words. We randomly selected a development set and a test set, and then the remaining sentence pairs are for training set. The detailed corpus statistics are shown in Table 1. Furthermore, development set and test set are divided into various intervals according to their best fuzzy match scores. Corpus statistics for each interval in the test set are shown in Table 2. For the phrase-based SMT system, we adopted the Moses toolkit (Koehn et al., 2007). The system configurations are as follows: GIZA++ (Och and Ney, 2003) is used to obtain the bidirectional word alignments. Afterwards, “intersection” 3 refinement (Koehn et al., 2003) is adopted to extract phrase-pairs. We use the SRI Language Model toolkit (Stolcke, 2002) to train a 5-gram model with modified Kneser-Ney smoothing (Kneser and Ney, 1995; Chen and Goodman, 1998) on the target-side (English) training corpus. All the feature weights and the weight for each probability factor (3 factors for Model-III) are tuned on the development set with minimumerror-rate training (MERT) (Och, 2003). The maximum phrase length is set to 7 in our experiments. In this work, the translation performance is measured with case-insensitive BLEU-4 score (Papineni et al., 2002) and TER score (Snover et al., 2006). Statistical significance test is conducted with re-sampling (1,000 times) approach (Koehn, 2004) in 95% confidence level. 4.2 Cross-Fold Translation To estimate the probabilities of proposed models, the corresponding phrase segmentations for bilingual sentences are required. As we want to check what actually happened during decoding in the real situation, cross-fold translation is used to obtain the corresponding phrase segmentations. We first extract 95% of the bilingual sentences as a new training corpus to train a SMT system. 
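As a concrete reference for the interval division mentioned above, the fuzzy match score (FMS) of Section 3.1 is one minus the word-based Levenshtein distance, normalized here by the longer of the two sentence lengths (a normalization consistent with the 0.667 example given in Section 3.1). The sketch below is only an illustration; tokenization details and the paper's exact index convention for the interval variable are our own simplifications.

```python
def word_levenshtein(a, b):
    """Word-based Levenshtein distance between token lists a and b."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (wa != wb)))    # substitution / match
        prev = cur
    return prev[-1]

def fuzzy_match_score(src_tokens, tm_src_tokens):
    """FMS = 1 - word Levenshtein distance / max(sentence lengths)."""
    d = word_levenshtein(src_tokens, tm_src_tokens)
    return 1.0 - d / max(len(src_tokens), len(tm_src_tokens))

def fms_interval(fms):
    """Return the equal-width interval [lo, lo + 0.1) containing fms."""
    lo = min(int(fms * 10), 9) / 10.0
    return (lo, lo + 0.1)
```

For the sentence pair of Figure 1 (8 vs. 9 tokens, 3 word edits), this gives 1 - 3/9 ≈ 0.667, which falls in the [0.6, 0.7) interval, matching the value reported in Section 3.1.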
Afterwards, we generate the corresponding phrase segmentations for the remaining 5% of the bilingual sentences with Forced Decoding (Li et al., 2000; Zollmann et al., 2008; Auli et al., 2009; Wisniewski et al., 2010), which searches for the best phrase segmentation that produces the specified output. Having repeated the above steps 20 times4, we obtain the corresponding phrase segmentations for the SMT training data (which will then be used to train the integrated models). Due to OOV words and insertion words, not all source sentences can generate the desired results through forced decoding. Fortunately, in our work, 71.7% of the training bilingual sentences can generate the corresponding target results; the remaining 28.3% of the sentence pairs are thus not used for generating training samples. Furthermore, more than 90% of the obtained source phrases are observed to be shorter than 5 words, which explains why five quantization levels are adopted for Source Phrase Length (SPL) in Section 3.1.

3 "grow-diag-final" and "grow-diag-final-and" are also tested. However, "intersection" is the best option in our experiments, especially for the high fuzzy match intervals.
4 This training process only took about 10 hours on our Ubuntu server (Intel 4-core Xeon 3.47GHz, 132 GB of RAM).

               Train       Develop   Test
#Sentences     261,906     2,569     2,576
#Chn. Words    3,623,516   38,585    38,648
#Chn. VOC.     43,112      3,287     3,460
#Eng. Words    3,627,028   38,329    38,510
#Eng. VOC.     44,221      3,993     4,046
Table 1: Corpus Statistics

Intervals    #Sentences   #Words   W/S
[0.9, 1.0)   269          4,468    16.6
[0.8, 0.9)   362          5,004    13.8
[0.7, 0.8)   290          4,046    14.0
[0.6, 0.7)   379          4,998    13.2
[0.5, 0.6)   472          6,073    12.9
[0.4, 0.5)   401          5,921    14.8
[0.3, 0.4)   305          5,499    18.0
(0.0, 0.3)   98           2,639    26.9
(0.0, 1.0)   2,576        38,648   15.0
Table 2: Corpus Statistics for Test-Set

4.3 Translation Results
After obtaining all the training samples via cross-fold translation, we use the Factored Language Model toolkit (Kirchhoff et al., 2007) to estimate the probabilities of the integrated models with Witten-Bell smoothing (Bell et al., 1990; Witten et al., 1991) and a back-off method. Afterwards, we incorporate the TM information for each phrase at decoding. All experiments are conducted using the Moses phrase-based decoder (Koehn et al., 2007).

Intervals    TM      SMT     Model-I    Model-II   Model-III   Koehn-10   Ma-11   Ma-11-U
[0.9, 1.0)   81.31   81.38   85.44 *    86.47 *#   89.41 *#    82.79      77.72   82.78
[0.8, 0.9)   73.25   76.16   79.97 *    80.89 *    84.04 *#    79.74 *    73.00   77.66
[0.7, 0.8)   63.62   67.71   71.65 *    72.39 *    74.73 *#    71.02 *    66.54   69.78
[0.6, 0.7)   43.64   54.56   54.88 #    55.88 *#   57.53 *#    53.06      54.00   56.37
[0.5, 0.6)   27.37   46.32   47.32 *#   47.45 *#   47.54 *#    39.31      46.06   47.73
[0.4, 0.5)   15.43   37.18   37.25 #    37.60 #    38.18 *#    28.99      36.23   37.93
[0.3, 0.4)   8.24    29.27   29.52 #    29.38 #    29.15 #     23.58      29.40   30.20
(0.0, 0.3)   4.13    26.38   25.61 #    25.32 #    25.57 #     18.56      26.30   26.92
(0.0, 1.0)   40.17   53.03   54.57 *#   55.10 *#   56.51 *#    50.31      51.98   54.32
Table 3: Translation Results (BLEU%). Scores marked by "*" are significantly better (p < 0.05) than both the TM and the SMT systems, and those marked by "#" are significantly better (p < 0.05) than Koehn-10.

Intervals    TM      SMT     Model-I    Model-II   Model-III   Koehn-10   Ma-11   Ma-11-U
[0.9, 1.0)   9.79    13.01   9.22 #     8.52 *#    6.77 *#     13.01      18.80   11.90
[0.8, 0.9)   16.21   16.07   13.12 *#   12.74 *#   10.75 *#    15.27      20.60   14.74
[0.7, 0.8)   27.79   22.80   19.10 *#   18.58 *#   17.11 *#    21.85      25.33   21.11
[0.6, 0.7)   46.40   33.38   32.63 #    32.27 *#   29.96 *#    35.93      35.24   31.76
[0.5, 0.6)   62.59   39.56   38.24 *#   38.77 *#   38.74 *#    47.37      40.24   38.01
[0.4, 0.5)   73.93   47.19   47.03 #    46.34 *#   46.00 *#    56.84      48.74   46.10
[0.3, 0.4)   79.86   55.71   55.38 #    55.44 #    55.87 #     64.55      55.93   54.15
(0.0, 0.3)   85.31   61.76   62.38 #    63.66 #    63.51 #     73.30      63.00   60.67
(0.0, 1.0)   50.51   35.88   34.34 *#   34.18 *#   33.26 *#    40.75      38.10   34.49
Table 4: Translation Results (TER%). Scores marked by "*" are significantly better (p < 0.05) than both the TM and the SMT systems, and those marked by "#" are significantly better (p < 0.05) than Koehn-10.

Tables 3 and 4 give the translation results of TM, SMT, and the three integrated models on the test set. In the tables, the best translation result (in either BLEU or TER) at each interval is marked in bold. Scores marked by "*" are significantly better (p < 0.05) than both the TM and the SMT systems. It can be seen that TM significantly exceeds SMT at the interval [0.9, 1.0) in TER score, which illustrates why professional translators prefer TM over SMT as their assistant tool. Compared with TM and SMT, Model-I is significantly better than the SMT system in either BLEU or TER when the fuzzy match score is above 0.7; Model-II significantly outperforms both the TM and the SMT systems in either BLEU or TER when the fuzzy match score is above 0.5; and Model-III significantly exceeds both the TM and the SMT systems in either BLEU or TER when the fuzzy match score is above 0.4. All these improvements show that our integrated models combine the strengths of both TM and SMT. However, the improvements from the integrated models shrink as the fuzzy match score decreases. For example, Model-III outperforms SMT by 8.03 BLEU points at interval [0.9, 1.0), while the advantage is only 2.97 BLEU points at interval [0.6, 0.7). This is because a lower fuzzy match score means there are more unmatched parts between the input sentence and the TM source sentence; the output of TM is thus less reliable. Across all intervals (the last row in each table), Model-III not only achieves the best BLEU score (56.51), but also obtains the best TER score (33.26). If the intervals are evaluated separately, Model-III outperforms both Model-II and Model-I in either BLEU or TER when the fuzzy match score is above 0.4, and Model-II also exceeds Model-I in either BLEU or TER. The only exception is the interval [0.5, 0.6), at which Model-I achieves the best TER score. This might be because the optimization criterion for MERT is BLEU rather than TER in our work.

4.4 Comparison with Previous Work
In order to compare our proposed models with previous work, we re-implement two XML-Markup approaches, (Koehn and Senellart, 2010) and (Ma et al., 2011), which are denoted as Koehn-10 and Ma-11, respectively. They are selected because they report superior performances in the literature.
A brief description of them is as follows: 17 Source 如果0 禁用1 此2 策略3 设置4 ,5 internet6 explorer7 不8 搜索9 internet10 查找11 浏览器12 的13 新14 版本15 ,16 因此17 不18 会19 提示20 用户21 安装22 。23 Reference if0 you1 disable2 this3 policy4 setting5 ,6 internet7 explorer8 does9 not10 check11 the12 internet13 for14 new15 versions16 of17 the18 browser19 ,20 so21 does22 not23 prompt24 users25 to26 install27 them28 .29 TM Source 如果0 不1 配置2 此3 策略4 设置5 ,6 internet7 explorer8 不9 搜索10 internet11 查找12 浏览 器13 的14 新15 版本16 ,17 因此18 不19 会20 提示21 用户22 安装23 。24 TM Target if0 you1 do2 not3 configure4 this5 policy6 setting7 ,8 internet9 explorer10 does11 not12 check13 the14 internet15 for16 new17 versions18 of19 the20 browser21 ,22 so23 does24 not25 prompt26 users27 to28 install29 them30 .31 TM Alignment 0-0 1-3 2-4 3-5 4-6 5-7 6-8 7-9 8-10 9-11 11-15 13-21 14-19 15-17 16-18 17-22 18-23 19-24 21-26 22-27 23-29 24-31 SMT if you disable this policy setting , internet explorer does not prompt users to install internet for new versions of the browser . [Miss 7 target words: 9~12, 20~21, 28; Has one wrong permutation] Koehn-10 if you do you disable this policy setting , internet explorer does not check the internet for new versions of the browser , so does not prompt users to install them . [Insert two spurious target words] Ma-11 if you disable this policy setting , internet explorer does not prompt users to install internet for new versions of the browser . [Miss 7 target words: 9~12, 20~21, 28; Has one wrong permutation] Model-I if you disable this policy setting , internet explorer does not prompt users to install new versions of the browser , so does not check the internet . [Miss 2 target words: 14, 28; Has one wrong permutation] Model-II if you disable this policy setting , internet explorer does not prompt users to install new versions of the browser , so does not check the internet . [Miss 2 target words: 14, 28; Has one wrong permutation] Model-III if you disable this policy setting , internet explorer does not check the internet for new versions of the browser , so does not prompt users to install them . [Exactly the same as the reference] Figure 2: A Translation Example at Interval [0.9, 1.0] (with FMS=0.920) Koehn et al. (2010) first find out the unmatched parts between the given source sentence and TM source sentence. Afterwards, for each unmatched phrase in the TM source sentence, they replace its corresponding translation in the TM target sentence by the corresponding source phrase in the input sentence, and then mark the substitution part. After replacing the corresponding translations of all unmatched source phrases in the TM target sentence, an XML input sentence (with mixed TM target phrases and marked input source phrases) is thus obtained. The SMT decoder then only translates the unmatched/marked source phrases and gets the desired results. Therefore, the inserted parts in the TM target sentence are automatically included. They use fuzzy match score to determine whether the current sentence should be marked or not; and their experiments show that this method is only effective when the fuzzy match score is above 0.8. Ma et al. (2011) think fuzzy match score is not reliable and use a discriminative learning method to decide whether the current sentence should be marked or not. Another difference between Ma11 and Koehn-10 is how the XML input is constructed. In constructing the XML input sentence, Ma-11 replaces each matched source phrase in the given source sentence with the corresponding TM target phrase. 
Therefore, the inserted parts in the TM target sentence are not included. In Ma’s another paper (He et al., 2011), more linguistic features for discriminative learning are also added. In our work, we only re-implement the XMLMarkup method used in (He et al., 2011; Ma et al, 2011), but do not implement the discriminative learning method. This is because the features adopted in their discriminative learning are complicated and difficult to re-implement. However, the proposed Model-III even outperforms the upper bound of their methods, which will be discussed later. Table 3 and 4 give the translation results of Koehn-10 and Ma-11 (without the discriminator). Scores marked by “#” are significantly better (p < 0.05) than Koehn-10. Besides, the upper bound of (Ma et al, 2011) is also given in the tables, which is denoted as Ma-11-U. We calculate this 18 upper bound according to the method described in (Ma et al., 2011). Since He et al., (2011) only add more linguistic features to the discriminative learning method, the upper bound of (He et al., 2011) is still the same with (Ma et al., 2011); therefore, Ma-11-U applies for both cases. It is observed that Model-III significantly exceeds Koehn-10 at all intervals. More importantly, the proposed models achieve much better TER score than the TM system does at interval [0.9, 1.0), but Koehn-10 does not even exceed the TM system at this interval. Furthermore, Model-III is much better than Ma-11-U at most intervals. Therefore, it can be concluded that the proposed models outperform the pipeline approaches significantly. Figure 2 gives an example at interval [0.9, 1.0), which shows the difference among different system outputs. It can be seen that “you do” is redundant for Koehn-10, because they are insertions and thus are kept in the XML input. However, SMT system still inserts another “you”, regardless of “you do” has already existed. This problem does not occur at Ma-11, but it misses some words and adopts one wrong permutation. Besides, Model-I selects more right words than SMT does but still puts them in wrong positions due to ignoring TM reordering information. In this example, Model-II obtains the same results with Model-I because it also lacks reordering information. Last, since Model-III considers both TM content and TM position information, it gives a perfect translation. 5 Conclusion and Future Work Unlike the previous pipeline approaches, which directly merge TM phrases into the final translation result, we integrate TM information of each source phrase into the phrase-based SMT at decoding. In addition, all possible TM target phrases are kept and the proposed models select the best one during decoding via referring SMT information. Besides, the integrated model considers the probability information of both SMT and TM factors. The experiments show that the proposed Model-III outperforms both the TM and the SMT systems significantly (p < 0.05) in either BLEU or TER when fuzzy match score is above 0.4. Compared with the pure SMT system, Model-III achieves overall 3.48 BLEU points improvement and 2.62 TER points reduction on a Chinese– English TM database. Furthermore, Model-III significantly exceeds all previous pipeline approaches. Similar improvements are also observed on the Hansards parts of LDC2004T08 (not shown in this paper due to space limitation). Since no language-dependent feature is adopted, the proposed approaches can be easily adapted for other language pairs. 
Moreover, following the approaches of Koehn-10 and Ma-11 (to give a fair comparison), training data for SMT and TM are the same in the current experiments. However, the TM is expected to play an even more important role when the SMT training-set differs from the TM database, as additional phrase-pairs that are unseen in the SMT phrase table can be extracted from TM (which can then be dynamically added into the SMT phrase table at decoding time). Our another study has shown that the integrated model would be even more effective when the TM database and the SMT training data-set are from different corpora in the same domain (not shown in this paper). In addition, more source phrases can be matched if a set of high-FMS sentences, instead of only the sentence with the highest FMS, can be extracted and referred at the same time. And it could further raise the performance. Last, some related approaches (Smith and Clark, 2009; Phillips, 2011) combine SMT and example-based machine translation (EBMT) (Nagao, 1984). It would be also interesting to compare our integrated approach with that of theirs. Acknowledgments The research work has been funded by the HiTech Research and Development Program (“863” Program) of China under Grant No. 2011AA01A207, 2012AA011101, and 2012AA011102 and also supported by the Key Project of Knowledge Innovation Program of Chinese Academy of Sciences under Grant No.KGZD-EW-501. The authors would like to thank the anonymous reviewers for their insightful comments and suggestions. Our sincere thanks are also extended to Dr. Yanjun Ma and Dr. Yifan He for their valuable discussions during this study. References Michael Auli, Adam Lopez, Hieu Hoang and Philipp Koehn, 2009. A systematic analysis of translation model search spaces. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 224–232. 19 Timothy C. Bell, J.G. Cleary and Ian H. Witten, 1990. Text compression: Prentice Hall, Englewood Cliffs, NJ. Ergun Biçici and Marc Dymetman. 2008. Dynamic translation memory: using statistical machine translation to improve translation memory fuzzy matches. In Proceedings of the 9th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2008), pages 454–465. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University Center for Research in Computing Technology. Yifan He, Yanjun Ma, Josef van Genabith and Andy Way, 2010. Bridging SMT and TM with translation recommendation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 622–630. Yifan He, Yanjun Ma, Andy Way and Josef van Genabith. 2011. Rich linguistic features for translation memory-inspired consistent translation. In Proceedings of the Thirteenth Machine Translation Summit, pages 456–463. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 181–184. Katrin Kirchhoff, Jeff A. Bilmes and Kevin Duh. 2007. Factored language models tutorial. Technical report, Department of Electrical Engineering, University of Washington, Seattle, Washington, USA. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 388–395, Barcelona, Spain. 
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer and Ondřej Bojar. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL 2007 Demo and Poster Sessions, pages 177–180. Philipp Koehn, Franz Josef Och and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 48–54. Philipp Koehn and Jean Senellart. 2010. Convergence of translation memory and statistical machine translation. In AMTA Workshop on MT Research and the Translation Industry, pages 21–31. Elina Lagoudaki. 2006. Translation memories survey 2006: Users’ perceptions around tm use. In Proceedings of the ASLIB International Conference Translating and the Computer 28, pages 1–29. Qi Li, Biing-Hwang Juang, Qiru Zhou, and Chin-Hui Lee. 2000. Automatic verbal information verification for user authentication. IEEE transactions on speech and audio processing, Vol. 8, No. 5, pages 1063–6676. Vladimir Iosifovich Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10 (8). pages 707– 710. Yanjun Ma, Yifan He, Andy Way and Josef van Genabith. 2011. Consistent translation using discriminative learning: a translation memory-inspired approach. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1239–1248, Portland, Oregon. Makoto Nagao, 1984. A framework of a mechanical translation between Japanese and English by analogy principle. In: Banerji, Alick Elithorn and Ranan (ed). Artifiical and Human Intelligence: Edited Review Papers Presented at the International NATO Symposium on Artificial and Human Intelligence. North-Holland, Amsterdam, 173–180. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29 (1). pages 19–51. Kishore Papineni, Salim Roukos, Todd Ward and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311– 318. Aaron B. Phillips, 2011. Cunei: open-source machine translation with relevance-based models of each translation instance. Machine Translation, 25 (2). pages 166-177. Richard Sikes. 2007, Fuzzy matching in theory and practice. Multilingual, 18(6):39–43. Michel Simard and Pierre Isabelle. 2009. Phrasebased machine translation in a computer-assisted translation environment. In Proceedings of the Twelfth Machine Translation Summit (MT Summit XII), pages 120–127. James Smith and Stephen Clark. 2009. EBMT for SMT: a new EBMT-SMT hybrid. In Proceedings of the 3rd International Workshop on Example20 Based Machine Translation (EBMT'09), pages 3– 10, Dublin, Ireland. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas (AMTA-2006), pages 223–231. Andreas Stolcke. 2002. SRILM-an extensible language modeling toolkit. 
In Proceedings of the International Conference on Spoken Language Processing, pages 311–318. Guillaume Wisniewski, Alexandre Allauzen and François Yvon, 2010. Assessing phrase-based translation models with oracle decoding. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 933–943. Ian H. Witten and Timothy C. Bell. 1991. The zerofrequency problem: estimating the probabilities of novel events in adaptive test compression. IEEE Transactions on Information Theory, 37(4): 1085– 1094, July. Ventsislav Zhechev and Josef van Genabith. 2010. Seeding statistical machine translation with translation memory output through tree-based structural alignment. In Proceedings of the 4th Workshop on Syntax and Structure in Statistical Translation, pages 43–51. Andreas Zollmann, Ashish Venugopal, Franz Josef Och and Jay Ponte, 2008. A systematic comparison of phrase-based, hierarchical and syntaxaugmented statistical MT. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 1145–1152. 21
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 196–206, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Fast and Robust Compressive Summarization with Dual Decomposition and Multi-Task Learning Miguel B. Almeida∗† Andr´e F. T. Martins∗† ∗Priberam Labs, Alameda D. Afonso Henriques, 41, 2o, 1000-123 Lisboa, Portugal †Instituto de Telecomunicac¸˜oes, Instituto Superior T´ecnico, 1049-001 Lisboa, Portugal {mba,atm}@priberam.pt Abstract We present a dual decomposition framework for multi-document summarization, using a model that jointly extracts and compresses sentences. Compared with previous work based on integer linear programming, our approach does not require external solvers, is significantly faster, and is modular in the three qualities a summary should have: conciseness, informativeness, and grammaticality. In addition, we propose a multi-task learning framework to take advantage of existing data for extractive summarization and sentence compression. Experiments in the TAC2008 dataset yield the highest published ROUGE scores to date, with runtimes that rival those of extractive summarizers. 1 Introduction Automatic text summarization is a seminal problem in information retrieval and natural language processing (Luhn, 1958; Baxendale, 1958; Edmundson, 1969). Today, with the overwhelming amount of information available on the Web, the demand for fast, robust, and scalable summarization systems is stronger than ever. Up to now, extractive systems have been the most popular in multi-document summarization. These systems produce a summary by extracting a representative set of sentences from the original documents (Kupiec et al., 1995; Carbonell and Goldstein, 1998; Radev et al., 2000; Gillick et al., 2008). This approach has obvious advantages: it reduces the search space by letting decisions be made for each sentence as a whole (avoiding finegrained text generation), and it ensures a grammatical summary, assuming the original sentences are well-formed. The typical trade-offs in these models (maximizing relevance, and penalizing redundancy) lead to submodular optimization problems (Lin and Bilmes, 2010), which are NP-hard but approximable through greedy algorithms; learning is possible with standard structured prediction algorithms (Sipos et al., 2012; Lin and Bilmes, 2012). Probabilistic models have also been proposed to capture the problem structure, such as determinantal point processes (Gillenwater et al., 2012). However, extractive systems are rather limited in the summaries they can produce. Long, partly relevant sentences tend not to appear in the summary, or to block the inclusion of other sentences. This has motivated research in compressive summarization (Lin, 2003; Zajic et al., 2006; Daum´e, 2006), where summaries are formed by compressed sentences (Knight and Marcu, 2000), not necessarily extracts. While promising results have been achieved by models that simultaneously extract and compress (Martins and Smith, 2009; Woodsend and Lapata, 2010; Berg-Kirkpatrick et al., 2011), there are still obstacles that need to be surmounted for these systems to enjoy wide adoption. All approaches above are based on integer linear programming (ILP), suffering from slow runtimes, when compared to extractive systems. For example, Woodsend and Lapata (2012) report 55 seconds on average to produce a summary; Berg-Kirkpatrick et al. (2011) report substantially faster runtimes, but fewer compressions are allowed. 
Having a compressive summarizer which is both fast and expressive remains an open problem. A second inconvenience of ILP-based approaches is that they do not exploit the modularity of the problem, since the declarative specification required by ILP solvers discards important structural information. For example, such solvers are unable to take advantage of efficient dynamic programming routines for sentence compression (McDonald, 2006). 196 This paper makes progress in two fronts: • We derive a dual decomposition framework for extractive and compressive summarization (§2– 3). Not only is this framework orders of magnitude more efficient than the ILP-based approaches, it also allows the three well-known metrics of summaries—conciseness, informativeness, and grammaticality—to be treated separately in a modular fashion (see Figure 1). We also contribute with a novel knapsack factor, along with a linear-time algorithm for the corresponding dual decomposition subproblem. • We propose multi-task learning (§4) as a principled way to train compressive summarizers, using auxiliary data for extractive summarization and sentence compression. To this end, we adapt the framework of Evgeniou and Pontil (2004) and Daum´e (2007) to train structured predictors that share some of their parts. Experiments on TAC data (§5) yield state-of-theart results, with runtimes similar to that of extractive systems. To our best knowledge, this had never been achieved by compressive summarizers. 2 Extractive Summarization In extractive summarization, we are given a set of sentences D := {s1, . . . , sN} belonging to one or more documents, and the goal is to extract a subset S ⊆D that conveys a good summary of D and whose total number of words does not exceed a prespecified budget B. We use an indicator vector y := ⟨yn⟩N n=1 to represent an extractive summary, where yn = 1 if sn ∈S, and yn = 0 otherwise. Let Ln be the number of words of the nth sentence. By designing a quality score function g : {0, 1}N →R, this can be cast as a global optimization problem with a knapsack constraint: maximize g(y) w.r.t. y ∈{0, 1}N s.t. PN n=1 Lnyn ≤B. (1) Intuitively, a good summary is one which selects sentences that individually convey “relevant” information, while collectively having small “redundancy.” This trade-off was explicitly modeled in early works through the notion of maximal marginal relevance (Carbonell and Goldstein, 1998; McDonald, 2007). An alternative are coverage-based models (§2.1; Filatova and Hatzivassiloglou, 2004; Yih et al., 2007; Gillick et al., 2008), which seek a set of sentences that covers as many diverse “concepts” as possible; redundancy is automatically penalized since redundant sentences cover fewer concepts. Both models can be framed under the framework of submodular optimization (Lin and Bilmes, 2010), leading to greedy algorithms that have approximation guarantees. However, extending these models to allow for sentence compression (as will be detailed in §3) breaks the diminishing returns property, making submodular optimization no longer applicable. 2.1 Coverage-Based Summarization Coverage-based extractive summarization can be formalized as follows. Let C(D) := {c1, . . . , cM} be a set of relevant concept types which are present in the original documents D.1 Let σm be a relevance score assigned to the mth concept, and let the set Im ⊆{1, . . . , N} contain the indices of the sentences in which this concept occurs. 
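As a concrete illustration of these definitions, the concept set C(D), the occurrence sets Im, and the scores σm can be instantiated for bigram concepts roughly as follows. This is only a sketch: the function name and the pruning threshold are our own, the scores here are raw document frequencies (a simple choice for bigram concepts), and the summarizer described in this paper instead learns σm from features, as detailed later in §3.1.

```python
from collections import defaultdict

def build_concepts(doc_sentences, min_doc_count=3):
    """Build bigram concept types, their occurrence sets I_m, and scores sigma_m.

    doc_sentences: list of documents, each a list of tokenized sentences
                   (a sentence is a list of lowercased/stemmed tokens).
    Returns: concepts (list of bigrams), I (list of sets of global sentence
             indices), sigma (list of scores).
    """
    doc_count = defaultdict(set)   # bigram -> set of document ids containing it
    occurs_in = defaultdict(set)   # bigram -> set of global sentence indices (I_m)
    sent_id = 0
    for d, sentences in enumerate(doc_sentences):
        for tokens in sentences:
            for bigram in zip(tokens, tokens[1:]):
                doc_count[bigram].add(d)
                occurs_in[bigram].add(sent_id)
            sent_id += 1

    concepts, I, sigma = [], [], []
    for bigram, docs in doc_count.items():
        if len(docs) >= min_doc_count:        # prune rare concepts (illustrative threshold)
            concepts.append(bigram)
            I.append(occurs_in[bigram])
            sigma.append(float(len(docs)))    # sigma_m = document frequency (illustrative)
    return concepts, I, sigma
```

The index sets and scores produced this way feed directly into the quality score defined next.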
Then, the following quality score function is defined: g(y) = PM m=1 σmum(y), (2) where um(y) := W n∈Im yn is a Boolean function that indicates whether the mth concept is present in the summary. Plugging this into Eq. 1, one obtains the following Boolean optimization problem: maximize PM m=1 σmum w.r.t. y ∈{0, 1}N, u ∈{0, 1}M s.t. um = W n∈Im yn, ∀m ∈[M] PN n=1 Lnyn ≤B, (3) where we used the notation [M] := {1, . . . , M}. This can be converted into an ILP and addressed with off-the-shelf solvers (Gillick et al., 2008). A drawback of this approach is that solving an ILP exactly is NP-hard. Even though existing commercial solvers can solve most instances with a moderate speed, they still exhibit poor worst-case behaviour; this is exacerbated when there is the need to combine an extractive component with other modules, as in compressive summarization (§3). 1Previous work has modeled concepts as events (Filatova and Hatzivassiloglou, 2004), salient words (Lin and Bilmes, 2010), and word bigrams (Gillick et al., 2008). In the sequel, we assume concepts are word k-grams, but our model can handle other representations, such as phrases or predicateargument structures. 197 2.2 A Dual Decomposition Formulation We next describe how the problem in Eq. 3 can be addressed with dual decomposition, a class of optimization techniques that tackle the dual of combinatorial problems in a modular, extensible, and parallelizable manner (Komodakis et al., 2007; Rush et al., 2010). In particular, we employ alternating directions dual decomposition (AD3; Martins et al., 2011a, 2012) for solving a linear relaxation of Eq. 3. AD3 resembles the subgradientbased algorithm of Rush et al. (2010), but it enjoys a faster convergence rate. Both algorithms split the original problem into several components, and then iterate between solving independent local subproblems at each component and adjusting multipliers to promote an agreement.2 The difference between the two methods is that the AD3 local subproblems, instead of requiring the computation of a locally optimal configuration, require solving a local quadratic problem. Martins et al. (2011b) provided linear-time solutions for several logic constraints, with applications to syntax and frame-semantic parsing (Das et al., 2012). We will see that AD3 can also handle budget and knapsack constraints efficiently. To tackle Eq. 3 with dual decomposition, we split the coverage-based summarizer into the following M + 1 components (one per constraint): 1. For each of the M concepts in C(D), one component for imposing the logic constraint in Eq. 3. This corresponds to the OR-WITHOUTPUT factor described by Martins et al. (2011b); the AD3 subproblem for the mth factor can be solved in time O(|Im|). 2. Another component for the knapsack constraint. This corresponds to a (novel) KNAPSACK factor, whose AD3 subproblem is solvable in time O(N). The actual algorithm is described in the appendix (Algorithm 1).3 3 Compressive Summarization We now turn to compressive summarization, which does not limit the summary sentences to be verbatim extracts from the original documents; in2For details about dual decomposition and Lagrangian relaxation, see the recent tutorial by Rush and Collins (2012). 3The AD3 subproblem in this case corresponds to computing an Euclidean projection onto the knapsack polytope (Eq. 11). Others addressed the related, but much harder, integer quadratic knapsack problem (McDonald, 2007). 
stead, it allows the extraction of compressed sentences where some words can be deleted. Formally, let us express each sentence of D as a sequence of word tokens, sn := ⟨tn,ℓ⟩Ln ℓ=0, where tn,0 ≡$ is a dummy symbol. We represent a compression of sn as an indicator vector zn := ⟨zn,ℓ⟩Ln ℓ=0, where zn,ℓ= 1 if the ℓth word is included in the compression. By convention, the dummy symbol is included if and only if the remaining compression is non-empty. A compressive summary can then be represented by an indicator vector z which is the concatenation of N such vectors, z = ⟨z1, . . . , zN⟩; each position in this indicator vector is indexed by a sentence n ∈[N] and a word position ℓ∈{0} ∪[Ln]. Models for compressive summarization were proposed by Martins and Smith (2009) and BergKirkpatrick et al. (2011) by combining extraction and compression scores. Here, we follow the latter work, by combining a coverage score function g with sentence-level compression score functions h1, . . . , hN. This yields the decoding problem: maximize g(z) + PN n=1 hn(zn) w.r.t. zn ∈{0, 1}Ln, ∀n ∈[N] s.t. PN n=1 PLn ℓ=1 zn,ℓ≤B. (4) 3.1 Coverage Model We use a coverage function similar to Eq. 2, but taking a compressive summary z as argument: g(z) = PM m=1 σmum(z), (5) where we redefine um as follows. First, we parametrize each occurrence of the mth concept (assumed to be a k-gram) as a triple ⟨n, ℓs, ℓe⟩, where n indexes a sentence, ℓs indexes a start position within the sentence, and ℓe indexes the end position. We denote by Tm the set of triples representing all occurrences of the mth concept in the original text, and we associate an indicator variable zn,ℓs:ℓe to each member of this set. We then define um(z) via the following logic constraints: • A concept type is selected if some of its k-gram tokens are selected: um(y) := W ⟨n,ℓs,ℓe⟩∈Tm zn,ℓs:ℓe. (6) • A k-gram concept token is selected if all its words are selected: zn,ℓs:ℓe := Vℓe ℓ=ℓs zn,ℓ. (7) 198 Sentences $ The leader of moderate Kashmiri separatists warned Thursday that ... $ Talks with Kashmiri separatists began last year ... "Kashmiri separatists" Budget Concept tokens Concept type Figure 1: Components of our compressive summarizer. Factors depicted in blue belong to the compression model, and aim to enforce grammaticality. The logic factors in red form the coverage component. Finally, the budget factor, in green, is connected to the word nodes; it ensures that the summary fits the word limit. Shaded circles represent active variables while white circles represent inactive variables. We set concept scores as σm := w · Φcov(D, cm), where Φcov(D, cm) is a vector of features (described in §3.5) and w the corresponding weights. 3.2 Compression Model For the compression score function, we follow Martins and Smith (2009) and decompose it as a sum of local score functions ρn,ℓdefined on dependency arcs: hn(zn) := PLn ℓ=1 ρn,ℓ(zn,ℓ, zn,π(ℓ)), (8) where π(ℓ) denotes the index of the word which is the parent of the ℓth word in the dependency tree (by convention, the root of the tree is the dummy symbol). To model the event that an arc is “cut” by disconnecting a child from its head, we define arc-deletion scores ρn,ℓ(0, 1) := w · Φcomp(sn, ℓ, π(ℓ)), where Φcomp is a feature map, which is described in detail in §3.5. We set ρn,ℓ(0, 0) = ρn,ℓ(1, 1) = 0, and ρn,ℓ(1, 0) = −∞, to allow only the deletion of entire subtrees. A crucial fact is that one can maximize Eq. 
8 efficiently with dynamic programming (using the Viterbi algorithm for trees); the total cost is linear in Ln. We will exploit this fact in the dual decomposition framework described next.4 3.3 A Dual Decomposition Formulation In previous work, the optimization problem in Eq. 4 was converted into an ILP and fed to an offthe-shelf solver (Martins and Smith, 2009; BergKirkpatrick et al., 2011; Woodsend and Lapata, 2012). Here, we employ the AD3 algorithm, in a 4The same framework can be readily adapted to other compression models that are efficiently decodable, such as the semi-Markov model of McDonald (2006), which would allow incorporating a language model for the compression. similar manner as described in §2, but with an additional component for the sentence compressor, and slight modifications in the other components. We have the following N + M + PM m=1 |Tm| + 1 components in total, illustrated in Figure 1: 1. For each of the N sentences, one component for the compression model. The AD3 quadratic subproblem for this factor can be addressed by solving a sequence of linear subproblems, as described by Martins et al. (2012). Each of these subproblems corresponds to maximizing an objective function of the same form as Eq. 8; this can be done in O(Ln) time with dynamic programming, as discussed in §3.2. 2. For each of the M concept types in C(D), one OR-WITH-OUTPUT factor for the logic constraint in Eq. 6. This is analogous to the one described for the extractive case. 3. For each k-gram concept token in Tm, one AND-WITH-OUTPUT factor that imposes the constraint in Eq. 7. This factor was described by Martins et al. (2011b) and its AD3 subproblem can be solved in time linear in k. 4. Another component linked to all the words imposing that at most B words can be selected; this is done via a BUDGET factor, a particular case of KNAPSACK. The runtime of this AD3 subproblem is linear in the number of words. In addition, we found it useful to add a second BUDGET factor limiting the number of sentences that can be selected to a prescribed value K. We set K = 6 in our experiments. 199 3.4 Rounding Strategy Recall that the problem in Eq. 4 is NP-hard, and that AD3 is solving a linear relaxation. While there are ways of wrapping AD3 in an exact search algorithm (Das et al., 2012), such strategies work best when the solution of the relaxation has few fractional components, which is typical of parsing and translation problems (Rush et al., 2010; Chang and Collins, 2011), and attractive networks (Taskar et al., 2004). Unfortunately, this is not the case in summarization, where concepts “compete” with each other for inclusion in the summary, leading to frustrated cycles. We chose instead to adopt a fast and simple rounding procedure for obtaining a summary from a fractional solution. The procedure works as follows. First, solve the LP relaxation using AD3, as described above. This yields a solution z∗, where each component lies in the unit interval [0, 1]. If these components are all integer, then we have a certificate that this is the optimal solution. Otherwise, we collect the K sentences with the highest values of z∗ n,0 (“posteriors” on sentences), and seek the feasible summary which is the closest (in Euclidean distance) to z∗, while only containing those sentences. This can be computed exactly in time O(B PK k=1 Lnk), through dynamic programming.5 3.5 Features and Hard Constraints As Berg-Kirkpatrick et al. 
(2011), we used stemmed word bigrams as concepts, to which we associate the following concept features (Φcov): indicators for document counts, features indicating if each of the words in the bigram is a stopword, the earliest position in a document each concept occurs, as well as two and three-way conjunctions of these features. For the compression model, we include the following arc-deletion features (Φcomp): • the dependency label of the arc being deleted, as well as its conjunction with the part-of-speech tag of the head, of the modifier, and of both; • the dependency labels of the arc being deleted and of its parent arc; • the modifier tag, if the modifier is a function word modifying a verb ; 5Briefly, if we link the roots of the K sentences to a superroot node, the problem above can be transformed into that of finding the best configuration in the resulting binary tree subject to a budget constraint. We omit details for space. • a feature indicating whether the modifier or any of its descendants is a negation word; • indicators of whether the modifier is a temporal word (e.g., Friday) or a preposition pointing to a temporal word (e.g., on Friday). In addition, we included hard constraints to prevent the deletion of certain arcs, following previous work in sentence compression (Clarke and Lapata, 2008). We never delete arcs whose dependency label is SUB, OBJ, PMOD, SBAR, VC, or PRD (this makes sure we preserve subjects and objects of verbs, arcs departing from prepositions or complementizers, and that we do not break verb chains or predicative complements); arcs linking to a conjunction word or siblings of such arcs (to prevent inconsistencies in handling coordinative conjunctions); arcs linking verbs to other verbs, to adjectives (e.g., make available), to verb particles (e.g., settle down), to the word that (e.g., said that), or to the word to if it is a leaf (e.g., allowed to come); arcs pointing to negation words, cardinal numbers, or determiners; and arcs connecting two proper nouns or words within quotation marks. 4 Multi-Task Learning We next turn to the problem of learning the model from training data. Prior work in compressive summarization has followed one of two strategies: Martins and Smith (2009) and Woodsend and Lapata (2012) learn the extraction and compression models separately, and then post-combine them, circumventing the lack of fully annotated data. Berg-Kirkpatrick et al. (2011) gathered a small dataset of manually compressed summaries, and trained with full supervision. While the latter approach is statistically more principled, it has the disadvantage of requiring fully annotated data, which is difficult to obtain in large quantities. On the other hand, there is plenty of data containing manually written abstracts (from the DUC and TAC conferences) and user-generated text (from Wikipedia) that may provide useful weak supervision. With this in mind, we put together a multi-task learning framework for compressive summarization (which we name task #1). The goal is to take advantage of existing data for related tasks, such as extractive summarization (task #2), and sentence compression (task #3). The three tasks are instances of structured predictors (Bakır et 200 Tasks Features Decoder Comp. summ. (#1) Φcov, Φcomp AD3 (solve Eq. 4) Extr. summ. (#2) Φcov AD3 (solve Eq. 3) Sent. comp. (#3) Φcomp dyn. prg. (max. Eq. 8) Table 1: Features and decoders used for each task. 
al., 2007), and for all of them we assume featurebased models that decompose over “parts”: • For the compressive summarization task, the parts correspond to concept features (§3.1) and to arc-deletion features (§3.2). • For the extractive summarization task, there are parts for concept features only. • For the sentence compression task, the parts correspond to arc-deletion features only. This is summarized in Table 1. Features for the three tasks are populated into feature vectors Φ1(x, y), Φ2(x, y), and Φ3(x, y), respectively, where ⟨x, y⟩denotes a task-specific input-output pair. We assume the feature vectors are all D dimensional, where we place zeros in entries corresponding to parts that are absent. Note that this setting is very general and applies to arbitrary structured prediction problems (not just summarization), the only assumption being that some parts are shared between different tasks. Next, we associate weight vectors v1, v2, v3 ∈ RD to each task, along with a “shared” vector w. Each task makes predictions according to the rule: by := arg max y (w + vk) · Φk(x, y), (9) where k ∈{1, 2, 3}. This setting is equivalent to the approach of Daum´e (2007) for domain adaptation, which consists in splitting each feature into task-component features and a shared feature; but here we do not duplicate features explicitly. To learn the weights, we regularize the weight vectors separately, and assume that each task has its own loss function Lk, so that the total loss L is a weighted sum L(w, v1, v2, v3) := P3 k=1 σkLk(w + vk). This yields the following objective function to be minimized: F(w, v1, v2, v3) = λ 2 ∥w∥2 + 3 X k=1 λk 2 ∥vk∥2 + 1 N 3 X k=1 σkLk(w + vk), (10) where λ and the λk’s are regularization constants, and N is the total number of training instances.6 In our experiments (§5), we let the Lk’s be structured hinge losses (Taskar et al., 2003; Tsochantaridis et al., 2004), where the corresponding cost functions are concept recall (for task #2), precision of arc deletions (for task #3), and a combination thereof (for task #1).7 These losses were normalized, and we set σk = N/Nk, where Nk is the number of training instances for the kth task. This ensures all tasks are weighted evenly. We used the same rationale to set λ = λ1 = λ2 = λ3, choosing this value through cross-validation in the dev set. We optimize Eq. 10 with stochastic subgradient descent. This leads to update rules of the form w ← (1 −ηtλ)w −ηtσk ˜∇Lk(w + vk) vj ← (1 −ηtλj)vj −ηtδjkσk ˜∇Lk(w + vk), where ˜∇Lk are stochastic subgradients for the kth task, that take only a single instance into account, and δjk = 1 if and only if j = k. Stochastic subgradients can be computed via cost-augmented decoding (see footnote 7). Interestingly, Eq. 10 subsumes previous approaches to train compressive summarizers. The limit λ →∞(keeping the λk’s fixed) forces w → 0, decoupling all the tasks. In this limit, inference for task #1 (compressive summarization) is based solely on the model learned from that task’s data, recovering the approach of Berg-Kirkpatrick et al. (2011). In the other extreme, setting σ1 = 0 simply ignores task #1’s training data. As a result, the optimal v1 will be a vector of zeros; since tasks #2 and #3 have no parts in common, the objective will decouple into a sum of two independent terms 6Note that, by substituting uk := w + vk and solving for w, the problem in Eq. 
10 becomes that of minimizing the sum of the losses with a penalty for the (weighted) variance of the vectors {0, u1, u2, u3}, regularizing the difference towards their average, as in Evgeniou and Pontil (2004). This is similar to the hierarchical joint learning approach of Finkel and Manning (2010), except that our goal is to learn a new task (compressive summarization) instead of combining tasks. 7Let Yk denote the output set for the kth task. Given a task-specific cost function ∆k : Yk × Yk → R, and letting ⟨xt, yt⟩T t=1 be the labeled dataset for this task, the structured hinge loss takes the form Lk(uk) := P tmaxy′∈Yk(uk · (Φk(xt, y′) −Φk(xt, yt)) + ∆k(y′, yt)). The inner maximization over y′ is called the cost-augmented decoding problem: it differs from Eq. 9 by the inclusion of the cost term ∆k(y′, yt). Our costs decompose over the model’s factors, hence any decoder for Eq. 9 can be used for the maximization above: for tasks #1–#2, we solve a relaxation by running AD3 without rounding, and for task #3 we use dynamic programming; see Table 1. 201 involving v2 and v3, which is equivalent to training the two tasks separately and post-combining the models, as Martins and Smith (2009) did. 5 Experiments 5.1 Experimental setup We evaluated our compressive summarizers on data from the Text Analysis Conference (TAC) evaluations. We use the same splits as previous work (Berg-Kirkpatrick et al., 2011; Woodsend and Lapata, 2012): the non-update portions of TAC-2009 for training and TAC-2008 for testing. In addition, we reserved TAC-2010 as a devset. The test partition contains 48 multi-document summarization problems; each provides 10 related news articles as input, and asks for a summary with up to 100 words, which is evaluated against four manually written abstracts. We ignored all the query information present in the TAC datasets. Single-Task Learning. In the single-task experiments, we trained a compressive summarizer on the dataset disclosed by Berg-Kirkpatrick et al. (2011), which contains manual compressive summaries for the TAC-2009 data. We trained a structured SVM with stochastic subgradient descent; the cost-augmented inference problems are relaxed and solved with AD3, as described in §3.3.8 We followed the procedure described in BergKirkpatrick et al. (2011) to reduce the number of candidate sentences: scores were defined for each sentence (the sum of the scores of the concepts they cover), and the best-scored sentences were greedily selected up to a limit of 1,000 words. We then tagged and parsed the selected sentences with TurboParser.9 Our choice of a dependency parser was motivated by our will for a fast system; in particular, TurboParser attains top accuracies at a rate of 1,200 words per second, keeping parsing times below 1 second for each summarization problem. Multi-Task Learning. For the multi-task experiments, we also used the dataset of BergKirkpatrick et al. (2011), but we augmented the training data with extractive summarization and sentence compression datasets, to help train the 8We use the AD3 implementation in http://www. ark.cs.cmu.edu/AD3, setting the maximum number of iterations to 200 at training time and 1000 at test time. We extended the code to handle the knapsack and budget factors; the modified code will be part of the next release (AD3 2.1). 9http://www.ark.cs.cmu.edu/TurboParser compressive summarizer. For extractive summarization, we used the DUC 2003 and 2004 datasets (a total of 80 multi-document summarization problems). 
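For concreteness, the training procedure of Section 4 that is run over this pooled data can be sketched roughly as follows. This is an illustrative sketch, not the actual implementation: the task interface below (in particular the subgradient routine, which stands in for cost-augmented decoding with AD3 or dynamic programming, cf. Table 1) and the step-size schedule are assumptions made only for the example.

import numpy as np

def train_multitask(tasks, D, epochs=5, eta0=0.1, lam=0.1, lam_k=0.1):
    # Schematic stochastic subgradient descent on the objective in Eq. 10.
    # Each task object is assumed to expose:
    #   task.data                  -- a list of (x, y) training pairs
    #   task.subgradient(u, x, y)  -- a D-dimensional stochastic subgradient of
    #       the structured hinge loss at u = w + v_k for one instance,
    #       obtained via cost-augmented decoding.
    w = np.zeros(D)
    v = [np.zeros(D) for _ in tasks]
    N = sum(len(t.data) for t in tasks)
    sigma = [N / len(t.data) for t in tasks]   # sigma_k = N / N_k (even task weighting)
    step = 0
    for _ in range(epochs):
        for k, task in enumerate(tasks):
            for x, y in task.data:
                step += 1
                eta = eta0 / np.sqrt(step)     # assumed step-size schedule
                g = task.subgradient(w + v[k], x, y)
                w *= 1.0 - eta * lam           # shrink the shared vector
                for j in range(len(v)):        # shrink all task-specific vectors
                    v[j] *= 1.0 - eta * lam_k
                w -= eta * sigma[k] * g        # subgradient step on w ...
                v[k] -= eta * sigma[k] * g     # ... and on the k-th task vector only
    return w, v

Continuing with the extractive summarization data used for task #2: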
We generated oracle extracts by maximizing bigram recall with respect to the manual abstracts, as described in Berg-Kirkpatrick et al. (2011). For sentence compression, we adapted the Simple English Wikipedia dataset of Woodsend and Lapata (2011), containing aligned sentences for 15,000 articles from the English and Simple English Wikipedias. We kept only the 4,481 sentence pairs corresponding to deletionbased compressions. 5.2 Results Table 2 shows the results. The top rows refer to three strong baselines: the ICSI-1 extractive coverage-based system of Gillick et al. (2008), which achieved the best ROUGE scores in the TAC-2008 evaluation; the compressive summarizer of Berg-Kirkpatrick et al. (2011), denoted BGK’11; and the multi-aspect compressive summarizer of Woodsend and Lapata (2012), denoted WL’12. All these systems require ILP solvers. The bottom rows show the results achieved by our implementation of a pure extractive system (similar to the learned extractive summarizer of Berg-Kirkpatrick et al., 2011); a system that postcombines extraction and compression components trained separately, as in Martins and Smith (2009); and our compressive summarizer trained as a single task, and in the multi-task setting. The ROUGE and Pyramid scores show that the compressive summarizers (when properly trained) yield considerable benefits in content coverage over extractive systems, confirming the results of Berg-Kirkpatrick et al. (2011). Comparing the two bottom rows, we see a clear benefit by training in the multi-task setting, with a consistent gain in both coverage and linguistic quality. Our ROUGE-2 score (12.30%) is, to our knowledge, the highest reported on the TAC-2008 dataset, with little harm in grammaticality with respect to an extractive system that preserves the original sentences. Figure 2 shows an example summary. 5.3 Runtimes We conducted another set of experiments to compare the runtime of our compressive summarizer based on AD3 with the runtimes achieved by GLPK, the ILP solver used by Berg-Kirkpatrick et al. (2011). We varied the maximum number of it202 System R-2 R-SU4 Pyr LQ ICSI-1 11.03 13.96 34.5† – BGK’11 11.71 14.47 41.3† – WL’12 11.37 14.47 – – Extractive 11.16 14.07 36.0 4.6 Post-comb. 11.07 13.85 38.4 4.1 Single-task 11.88 14.86 41.0 3.8 Multi-task 12.30 15.18 42.6 4.2 Table 2: Results for compressive summarization. Shown are the ROUGE-2 and ROUGE SU-4 recalls with the default options from the ROUGE toolkit (Lin, 2004); Pyramid scores (Nenkova and Passonneau, 2004); and linguistic quality scores, scored between 1 (very bad) to 5 (very good). For Pyramid, the evaluation was performed by two annotators, each evaluating half of the problems; scores marked with † were computed by different annotators and are not directly comparable. Linguistic quality was evaluated by two linguists; we show the average of the reported scores. Solver Runtime (sec.) ROUGE-2 ILP Exact 10.394 12.40 LP-Relax. 2.265 12.38 AD3-5000 0.952 12.38 AD3-1000 0.406 12.30 AD3-200 0.159 12.15 Extractive (ILP) 0.265 11.16 Table 3: Runtimes of several decoders on a Intel Core i7 processor @2.8 GHz, with 8GB RAM. For each decoder, we show the average time taken to solve a summarization problem in TAC-2008. The reported runtimes of AD3 and LP-Relax include the time taken to round the solution (§3.4), which is 0.029 seconds on average. erations of AD3 in {200, 1000, 5000}, and clocked the time spent by GLPK to solve the exact ILPs and their relaxations. 
Table 3 depicts the results.10 We see that our proposed configuration (AD31000) is orders of magnitude faster than the ILP solver, and 5 times faster than its relaxed variant, while keeping similar accuracy levels.11 The gain when the number of iterations in AD3 is increased to 5000 is small, given that the runtime is more 10Within dual decomposition algorithms, we verified experimentally that AD3 is substantially faster than the subgradient algorithm, which is consistent with previous findings (Martins et al., 2011b). 11The runtimes obtained with the exact ILP solver seem slower than those reported by Berg-Kirkpatrick et al. (2011). (around 1.5 sec. on average, according to their Fig. 3). We conjecture that this difference is due to the restricted set of subtrees that can be deleted by Berg-Kirkpatrick et al. (2011), which greatly reduces their search space. Japan dispatched four military ships to help Russia rescue seven crew members aboard a small submarine trapped on the seabed in the Far East. The Russian Pacific Fleet said the crew had 120 hours of oxygen reserves on board when the submarine submerged at midday Thursday (2300 GMT Wednesday) off the Kamchatka peninsula, the stretch of Far Eastern Russia facing the Bering Sea. The submarine, used in rescue, research and intelligence-gathering missions, became stuck at the bottom of the Bay of Berezovaya off Russia’s Far East coast when its propeller was caught in a fishing net. The Russian submarine had been tending an underwater antenna mounted to the sea floor when it became snagged on a wire helping to stabilize a ventilation cable attached to the antenna. Rescue crews lowered a British remote-controlled underwater vehicle to a Russian mini-submarine trapped deep under the Pacific Ocean, hoping to free the vessel and its seven trapped crewmen before their air supply ran out. Figure 2: Example summary from our compressive system. Removed text is grayed out. than doubled; accuracy starts to suffer, however, if the number of iterations is reduced too much. In practice, we observed that the final rounding procedure was crucial, as only 2 out of the 48 test problems had integral solutions (arguably because of the “repulsive” nature of the network, as hinted in §3.4). For comparison, we also report in the bottom row the average runtime of the learned extractive baseline. We can see that our system’s runtime is competitive with this baseline. To our knowledge, this is the first time a compressive summarizer achieves such a favorable accuracy/speed tradeoff. 6 Conclusions We presented a multi-task learning framework for compressive summarization, leveraging data for related tasks in a principled manner. We decode with AD3, a fast and modular dual decomposition algorithm which is orders of magnitude faster than ILP-based approaches. Results show that the state of the art is improved in automatic and manual metrics, with speeds close to extractive systems. Our approach is modular and easy to extend. For example, a different compression model could incorporate rewriting rules to enable compressions that go beyond word deletion, as in Cohn and Lapata (2008). Other aspects may be added as additional components in our dual decomposition framework, such as query information (Schilder and Kondadadi, 2008), discourse con203 straints (Clarke and Lapata, 2007), or lexical preferences (Woodsend and Lapata, 2012). 
Our multitask approach may be used to jointly learn parameters for these aspects; the dual decomposition algorithm ensures that optimization remains tractable even with many components. A Projection Onto Knapsack This section describes a linear-time algorithm (Algorithm 1) for solving the following problem: minimize ∥z −a∥2 w.r.t. zn ∈[0, 1], ∀n ∈[N], s.t. PN n=1 Lnzn ≤B, (11) where a ∈RN and Ln ≥0, ∀n ∈[N]. This includes as special cases the problems of projecting onto a budget constraint (Ln = 1, ∀n) and onto the simplex (same, plus B = 1). Let clip(t) := max{0, min{1, t}}. Algorithm 1 starts by clipping a to the unit interval; if that yields a z satisfying PN n=1 Lnzn ≤B, we are done. Otherwise, the solution of Eq. 11 must satisfy PN n=1 Lnzn = B. It can be shown from the KKT conditions that the solution is of the form z∗ n := clip(an +τ ∗Ln) for a constant τ ∗lying in a particular interval of split-points (line 11). To seek this constant, we use an algorithm due to Pardalos and Kovoor (1990) which iteratively shrinks this interval. The algorithm requires computing medians as a subroutine, which can be done in linear time (Blum et al., 1973). The overall complexity in O(N) (Pardalos and Kovoor, 1990). Acknowledgments We thank all reviewers for their insightful comments; Trevor Cohn for helpful discussions about multi-task learning; Taylor Berg-Kirkpatrick for answering questions about their summarizer and for providing code; and Helena Figueira and Pedro Mendes for helping with manual evaluation. This work was partially supported by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Discooperio project (contract 2011/18501), and by a FCT grant PTDC/EEI-SII/2312/2012. References G. Bakır, T. Hofmann, B. Sch¨olkopf, A. Smola, B. Taskar, and S. Vishwanathan. 2007. Predicting Structured Data. The MIT Press. Algorithm 1 Projection Onto Knapsack. 1: input: a := ⟨an⟩N n=1, costs ⟨Ln⟩N n=1, maximum cost B 2: 3: {Try to clip into unit interval:} 4: Set zn ←clip(an) for n ∈[N] 5: if PN n=1 Lnzn ≤B then 6: Return z and stop. 7: end if 8: 9: {Run Pardalos and Kovoor (1990)’s algorithm:} 10: Initialize working set W ←{1, . . . , K} 11: Initialize set of split points: P ←{−an/Ln, (1 −an)/Ln}N n=1 ∪{±∞} 12: Initialize τL ←−∞, τR ←∞, stight ←0, ξ ←0. 13: while W ̸= ∅do 14: Compute τ ←Median(P) 15: Set s ←stight + ξτ + P n∈W Lnclip(an + τLn) 16: If s ≤B, set τL ←τ; if s ≥B, set τR ←τ 17: Reduce set of split points: P ←P ∩[τL, τR] 18: Define the sets: WL := {n ∈W | (1 −an)/Ln < τL} WR := {n ∈W | −an/Ln > τR} WM :=  n ∈W −an Ln ≤τL ∧1 −an Ln ≥τR  19: Update working set: W ←W \ (WL ∪WR ∪WM) 20: Update tight-sum: stight ←stight+P n∈WL Ln(1−an)−P n∈WR Lnan 21: Update slack-sum: ξ ←ξ + P n∈WM L2 n 22: end while 23: Define τ ∗←(B −PN i=1 Liai −stight)/ξ 24: Set zn ←clip(an + τ ∗Ln), ∀n ∈[N] 25: output: z := ⟨zn⟩N n=1. P. B. Baxendale. 1958. Machine-made index for technical literature—an experiment. IBM Journal of Research Development, 2(4):354–361. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proc. of Annual Meeting of the Association for Computational Linguistics. Manuel Blum, Robert W Floyd, Vaughan Pratt, Ronald L Rivest, and Robert E Tarjan. 1973. Time bounds for selection. Journal of Computer and System Sciences, 7(4):448–461. J. Carbonell and J. Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In SIGIR. Y.-W. Chang and M. Collins. 2011. 
Exact decoding of phrase-based translation models through lagrangian relaxation. In Proc. of Empirical Methods for Natural Language Processing. James Clarke and Mirella Lapata. 2007. Modelling compression with discourse constraints. In Proc. of Empirical Methods in Natural Language Processing. J. Clarke and M. Lapata. 2008. Global Inference for Sentence Compression An Integer Linear Program204 ming Approach. Journal of Artificial Intelligence Research, 31:399–429. T. Cohn and M. Lapata. 2008. Sentence compression beyond word deletion. In Proc. COLING. D. Das, A. F. T. Martins, and N. A. Smith. 2012. An Exact Dual Decomposition Algorithm for Shallow Semantic Parsing with Constraints. In Proc. of First Joint Conference on Lexical and Computational Semantics (*SEM). H. Daum´e. 2006. Practical Structured Learning Techniques for Natural Language Processing. Ph.D. thesis, University of Southern California. H. Daum´e. 2007. Frustratingly easy domain adaptation. In Proc. of Annual Meeting of the Association for Computational Linguistics. H. P. Edmundson. 1969. New methods in automatic extracting. Journal of the ACM, 16(2):264–285. T. Evgeniou and M. Pontil. 2004. Regularized multi– task learning. In Proc. of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109–117. ACM. Elena Filatova and Vasileios Hatzivassiloglou. 2004. A formal model for information selection in multisentence text extraction. In Proc. of International Conference on Computational Linguistics. J.R. Finkel and C.D. Manning. 2010. Hierarchical joint learning: Improving joint parsing and named entity recognition with non-jointly labeled data. In Proc. of Annual Meeting of the Association for Computational Linguistics. J. Gillenwater, A. Kulesza, and B. Taskar. 2012. Discovering diverse and salient threads in document collections. In Proc. of Empirical Methods in Natural Language Processing. Dan Gillick, Benoit Favre, and Dilek Hakkani-Tur. 2008. The icsi summarization system at tac 2008. In Proc. of Text Understanding Conference. K. Knight and D. Marcu. 2000. Statistics-based summarization—step one: Sentence compression. In AAAI/IAAI. N. Komodakis, N. Paragios, and G. Tziritas. 2007. MRF optimization via dual decomposition: Message-passing revisited. In Proc. of International Conference on Computer Vision. J. Kupiec, J. Pedersen, and F. Chen. 1995. A trainable document summarizer. In SIGIR. H. Lin and J. Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular functions. In Proc. of Annual Meeting of the North American chapter of the Association for Computational Linguistics. H. Lin and J. Bilmes. 2012. Learning mixtures of submodular shells with application to document summarization. In Proc. of Uncertainty in Artificial Intelligence. C.-Y. Lin. 2003. Improving summarization performance by sentence compression-a pilot study. In the Int. Workshop on Inf. Ret. with Asian Languages. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Stan Szpakowicz Marie-Francine Moens, editor, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain, July. H. P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research Development, 2(2):159–165. A. F. T. Martins and N. A. Smith. 2009. Summarization with a Joint Model for Sentence Extraction and Compression. In North American Chapter of the Association for Computational Linguistics: Workshop on Integer Linear Programming for NLP. A. F. T. 
Martins, M. A. T. Figueiredo, P. M. Q. Aguiar, N. A. Smith, and E. P. Xing. 2011a. An Augmented Lagrangian Approach to Constrained MAP Inference. In Proc. of International Conference on Machine Learning. A. F. T. Martins, N. A. Smith, P. M. Q. Aguiar, and M. A. T. Figueiredo. 2011b. Dual Decomposition with Many Overlapping Components. In Proc. of Empirical Methods for Natural Language Processing. Andre F. T. Martins, Mario A. T. Figueiredo, Pedro M. Q. Aguiar, Noah A. Smith, and Eric P. Xing. 2012. Alternating Directions Dual Decomposition. Arxiv preprint arXiv:1212.6550. R. McDonald. 2006. Discriminative sentence compression with soft syntactic constraints. In Proc. of Annual Meeting of the European Chapter of the Association for Computational Linguistics. R. McDonald. 2007. A study of global inference algorithms in multi-document summarization. In ECIR. A. Nenkova and R. Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of NAACL, pages 145–152. Panos M. Pardalos and Naina Kovoor. 1990. An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds. Mathematical Programming, 46(1):321–328. D. R. Radev, H. Jing, and M. Budzikowska. 2000. Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies. In the NAACL-ANLP Workshop on Automatic Summarization. 205 A.M. Rush and M. Collins. 2012. A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing. Journal of Artificial Intelligence Research, 45:305–362. A. Rush, D. Sontag, M. Collins, and T. Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proc. of Empirical Methods for Natural Language Processing. Frank Schilder and Ravikumar Kondadadi. 2008. Fastsum: Fast and accurate query-based multi-document summarization. In Proc. of Annual Meeting of the Association for Computational Linguistics. R. Sipos, P. Shivaswamy, and T. Joachims. 2012. Large-margin learning of submodular summarization models. B. Taskar, C. Guestrin, and D. Koller. 2003. Maxmargin Markov networks. In Proc. of Neural Information Processing Systems. B. Taskar, V. Chatalbashev, and D. Koller. 2004. Learning associative Markov networks. In Proc. of International Conference of Machine Learning. I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proc. of International Conference of Machine Learning. K. Woodsend and M. Lapata. 2010. Automatic generation of story highlights. In Proc. of Annual Meeting of the Association for Computational Linguistics, pages 565–574. Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proc. of Empirical Methods in Natural Language Processing. Kristian Woodsend and Mirella Lapata. 2012. Multiple aspect summarization using integer linear programming. In Proc. of Empirical Methods in Natural Language Processing. Wen-tau Yih, Joshua Goodman, Lucy Vanderwende, and Hisami Suzuki. 2007. Multi-document summarization by maximizing informative content-words. In Proc. of International Joint Conference on Artifical Intelligence. D. Zajic, B. Dorr, J. Lin, and R. Schwartz. 2006. Sentence compression as a component of a multidocument summarization system. In the ACL DUC Workshop. 206
2013
20
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 207–217, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Unsupervised Transcription of Historical Documents Taylor Berg-Kirkpatrick Greg Durrett Dan Klein Computer Science Division University of California at Berkeley {tberg,gdurrett,klein}@cs.berkeley.edu Abstract We present a generative probabilistic model, inspired by historical printing processes, for transcribing images of documents from the printing press era. By jointly modeling the text of the document and the noisy (but regular) process of rendering glyphs, our unsupervised system is able to decipher font structure and more accurately transcribe images into text. Overall, our system substantially outperforms state-of-the-art solutions for this task, achieving a 31% relative reduction in word error rate over the leading commercial system for historical transcription, and a 47% relative reduction over Tesseract, Google’s open source OCR system. 1 Introduction Standard techniques for transcribing modern documents do not work well on historical ones. For example, even state-of-the-art OCR systems produce word error rates of over 50% on the documents shown in Figure 1. Unsurprisingly, such error rates are too high for many research projects (Arlitsch and Herbert, 2004; Shoemaker, 2005; Holley, 2010). We present a new, generative model specialized to transcribing printing-press era documents. Our model is inspired by the underlying printing processes and is designed to capture the primary sources of variation and noise. One key challenge is that the fonts used in historical documents are not standard (Shoemaker, 2005). For example, consider Figure 1a. The fonts are not irregular like handwriting – each occurrence of a given character type, e.g. a, will use the same underlying glyph. However, the exact glyphs are unknown. Some differences between fonts are minor, reflecting small variations in font design. Others are more severe, like the presence of the archaic long s character before 1804. To address the general problem of unknown fonts, our model (a) (b) (c) Figure 1: Portions of historical documents with (a) unknown font, (b) uneven baseline, and (c) over-inking. learns the font in an unsupervised fashion. Font shape and character segmentation are tightly coupled, and so they are modeled jointly. A second challenge with historical data is that the early typesetting process was noisy. Handcarved blocks were somewhat uneven and often failed to sit evenly on the mechanical baseline. Figure 1b shows an example of the text’s baseline moving up and down, with varying gaps between characters. To deal with these phenomena, our model incorporates random variables that specifically describe variations in vertical offset and horizontal spacing. A third challenge is that the actual inking was also noisy. For example, in Figure 1c some characters are thick from over-inking while others are obscured by ink bleeds. To be robust to such rendering irregularities, our model captures both inking levels and pixel-level noise. Because the model is generative, we can also treat areas that are obscured by larger ink blotches as unobserved, and let the model predict the obscured text based on visual and linguistic context. Our system, which we call Ocular, operates by fitting the model to each document in an unsupervised fashion. 
The system outperforms state-ofthe-art baselines, giving a 47% relative error reduction over Google’s open source Tesseract system, and giving a 31% relative error reduction over ABBYY’s commercial FineReader system, which has been used in large-scale historical transcription projects (Holley, 2010). 207 Over-inked It appeared that the Prisoner was very E : X : Wandering baseline Historical font Figure 2: An example image from a historical document (X) and its transcription (E). 2 Related Work Relatively little prior work has built models specifically for transcribing historical documents. Some of the challenges involved have been addressed (Ho and Nagy, 2000; Huang et al., 2006; Kae and Learned-Miller, 2009), but not in a way targeted to documents from the printing press era. For example, some approaches have learned fonts in an unsupervised fashion but require pre-segmentation of the image into character or word regions (Ho and Nagy, 2000; Huang et al., 2006), which is not feasible for noisy historical documents. Kae and Learned-Miller (2009) jointly learn the font and image segmentation but do not outperform modern baselines. Work that has directly addressed historical documents has done so using a pipelined approach, and without fully integrating a strong language model (Vamvakas et al., 2008; Kluzner et al., 2009; Kae et al., 2010; Kluzner et al., 2011). The most comparable work is that of Kopec and Lomelin (1996) and Kopec et al. (2001). They integrated typesetting models with language models, but did not model noise. In the NLP community, generative models have been developed specifically for correcting outputs of OCR systems (Kolak et al., 2003), but these do not deal directly with images. A closely related area of work is automatic decipherment (Ravi and Knight, 2008; Snyder et al., 2010; Ravi and Knight, 2011; Berg-Kirkpatrick and Klein, 2011). The fundamental problem is similar to our own: we are presented with a sequence of symbols, and we need to learn a correspondence between symbols and letters. Our approach is also similar in that we use a strong language model (in conjunction with the constraint that the correspondence be regular) to learn the correct mapping. However, the symbols are not noisy in decipherment problems and in our problem we face a grid of pixels for which the segmentation into symbols is unknown. In contrast, decipherment typically deals only with discrete symbols. 3 Model Most historical documents have unknown fonts, noisy typesetting layouts, and inconsistent ink levels, usually simultaneously. For example, the portion of the document shown in Figure 2 has all three of these problems. Our model must handle them jointly. We take a generative modeling approach inspired by the overall structure of the historical printing process. Our model generates images of documents line by line; we present the generative process for the image of a single line. Our primary random variables are E (the text) and X (the pixels in an image of the line). Additionally, we have a random variable T that specifies the layout of the bounding boxes of the glyphs in the image, and a random variable R that specifies aspects of the inking and rendering process. The joint distribution is: P(E, T, R, X) = P(E) [Language model] · P(T|E) [Typesetting model] · P(R) [Inking model] · P(X|E, T, R) [Noise model] We let capital letters denote vectors of concatenated random variables, and we denote the individual random variables with lower-case letters. 
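As a rough illustration of this factorization (anticipating the per-token decomposition spelled out in Sections 3.1-3.3), the log-probability of one line can be computed as a sum over the four component models. The component objects below (lm, typesetting, inking, noise) are hypothetical stand-ins for the distributions defined in the remainder of this section, not part of the actual system:

def log_joint(E, T, R, X, model):
    # log P(E, T, R, X) = log P(E) + log P(T|E) + log P(R) + log P(X|E,T,R)
    # E: character tokens; T[i] = (l_i, g_i, r_i); R[i] = (d_i, v_i);
    # X[i]: the pixel matrices for the i-th character box.
    lp = model.lm.log_prob(E)                              # language model P(E)
    for i, e_i in enumerate(E):
        lp += model.typesetting.log_prob(T[i], e_i)        # P(T_i | e_i)
        lp += model.inking.log_prob(R[i])                  # P(R_i)
        lp += model.noise.log_prob(X[i], e_i, T[i], R[i])  # P(X_i | e_i, T_i, R_i)
    return lp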
For example, E represents the entire sequence of text, while ei represents ith character in the sequence. 3.1 Language Model P(E) Our language model, P(E), is a Kneser-Ney smoothed character n-gram model (Kneser and Ney, 1995). We generate printed lines of text (rather than sentences) independently, without generating an explicit stop character. This means that, formally, the model must separately generate the character length of each line. We choose not to bias the model towards longer or shorter character sequences and let the line length m be drawn uniformly at random from the positive integers less than some large constant M.1 When i < 1, let ei denote a line-initial null character. We can now write: P(E) = P(m) · m Y i=1 P(ei|ei−1, . . . , ei−n) 1In particular, we do not use the kind of “word bonus” common to statistical machine translation models. 208 ei−1 ei+1 ei li gi ri X RPAD i X LPAD i X GLYPH i P( · | th) P( · | th) a b c ... z Offset: ✓VERT LM params cb b 1 30 1 5 1 5 a Glyph weights: φc Bounding box probs: Left pad width: ✓LPAD c Right pad width: ✓RPAD c Glyph width: ✓GLYPH c Font params a a a a aaa aaaaaaaaaaaaaaa a P( · | pe) Inking: ✓INK Inking params Figure 3: Character tokens ei are generated by the language model. For each token index i, a glyph bounding box width gi, left padding width li, and a right padding width ri, are generated. Finally, the pixels in each glyph bounding box X GLYPH i are generated conditioned on the corresponding character, while the pixels in left and right padding bounding boxes, X LPAD i and X RPAD i , are generated from a background distribution. 3.2 Typesetting Model P(T|E) Generally speaking, the process of typesetting produces a line of text by first tiling bounding boxes of various widths and then filling in the boxes with glyphs. Our generative model, which is depicted in Figure 3, reflects this process. As a first step, our model generates the dimensions of character bounding boxes; for each character token index i we generate three bounding box widths: a glyph box width gi, a left padding box width li, and a right padding box width ri, as shown in Figure 3. We let the pixel height of all lines be fixed to h. Let Ti = (li, gi, ri) so that Ti specifies the dimensions of the character box for token index i; T is then the concatenation of all Ti, denoting the full layout. Because the width of a glyph depends on its shape, and because of effects resulting from kerning and the use of ligatures, the components of each Ti are drawn conditioned on the character token ei. This means that, as part of our parameterization of the font, for each character type c we have vectors of multinomial parameters θLPAD c , θGLYPH c , and θRPAD c governing the distribution of the dimensions of character boxes of type c. These parameters are depicted on the right-hand side of Figure 3. We can now express the typesetting layout portion of the model as: P(T|E) = m Y i=1 P(Ti|ei) = m Y i=1  P(li; θ LPAD ei ) · P(gi; θ GLYPH ei ) · P(ri; θ RPAD ei )  Each character type c in our font has another set of parameters, a matrix φc. These are weights that specify the shape of the character type’s glyph, and are depicted in Figure 3 as part of the font parameters. φc will come into play when we begin generating pixels in Section 3.3. 3.2.1 Inking Model P(R) Before we start filling the character boxes with pixels, we need to specify some properties of the inking and rendering process, including the amount of ink used and vertical variation along the text baseline. 
Our model does this by generating, for each character token index i, a discrete value di that specifies the overall inking level in the character’s bounding box, and a discrete value vi that specifies the glyph’s vertical offset. These variations in the inking and typesetting process are mostly independent of character type. Thus, in 209 our model, their distributions are not characterspecific. There is one global set of multinomial parameters governing inking level (θINK), and another governing offset (θVERT); both are depicted on the left-hand side of Figure 3. Let Ri = (di, vi) and let R be the concatenation of all Ri so that we can express the inking model as: P(R) = m Y i=1 P(Ri) = m Y i=1  P(di; θ INK) · P(vi; θ VERT)  The di and vi variables are suppressed in Figure 3 to reduce clutter but are expressed in Figure 4, which depicts the process of rendering a glyph box. 3.3 Noise Model P(X|E, T, R) Now that we have generated a typesetting layout T and an inking context R, we have to actually generate each of the pixels in each of the character boxes, left padding boxes, and right padding boxes; the matrices that these groups of pixels comprise are denoted X GLYPH i , X LPAD i , and X RPAD i , respectively, and are depicted at the bottom of Figure 3. We assume that pixels are binary valued and sample their values independently from Bernoulli distributions.2 The probability of black (the Bernoulli parameter) depends on the type of pixel generated. All the pixels in a padding box have the same probability of black that depends only on the inking level of the box, di. Since we have already generated this value and the widths li and ri of each padding box, we have enough information to generate left and right padding pixel matrices X LPAD i and X RPAD i . The Bernoulli parameter of a pixel inside a glyph bounding box depends on the pixel’s location inside the box (as well as on di and vi, but for simplicity of exposition, we temporarily suppress this dependence) and on the model parameters governing glyph shape (for each character type c, the parameter matrix φc specifies the shape of the character’s glyph.) The process by which glyph pixels are generated is depicted in Figure 4. The dependence of glyph pixels on location complicates generation of the glyph pixel matrix X GLYPH i since the corresponding parameter matrix 2We could generate real-valued pixels with a different choice of noise distribution. } } } } } a a a a aa aaaaaaaaaaaaaaaaaaaaaaaaa aaaa a a a } Interpolate, apply logistic Sample pixels Choose width Choose offset Glyph weights gi di vi φei ✓PIXEL(j, k, gi, di, vi; φei) ⇥ X GLYPH i ⇤ jk ⇠Bernoulli Bernoulli parameters Pixel values Choose inking Figure 4: We generate the pixels for the character token ei by first sampling a glyph width gi, an inking level di, and a vertical offset vi. Then we interpolate the glyph weights φei and apply the logistic function to produce a matrix of Bernoulli parameters of width gi, inking di, and offset vi. θPIXEL(j, k, gi, di, vi; φei) is the Bernoulli parameter at row j and column k. Finally, we sample from each Bernoulli distribution to generate a matrix of pixel values, X GLYPH i . φei has some type-level width w which may differ from the current token-level width gi. Introducing distinct parameters for each possible width would yield a model that can learn completely different glyph shapes for slightly different widths of the same character. 
We, instead, need a parameterization that ties the shapes for different widths together, and at the same time allows mobility in the parameter space during learning. Our solution is to horizontally interpolate the weights of the shape parameter matrix φei down to a smaller set of columns matching the tokenlevel choice of glyph width gi. Thus, the typelevel matrix φei specifies the canonical shape of the glyph for character ei when it takes its maximum width w. After interpolating, we apply the logistic function to produce the individual Bernoulli parameters. If we let [X GLYPH i ]jk denote the value of the pixel at the jth row and kth column of the glyph pixel matrix X GLYPH i for token i, and let θPIXEL(j, k, gi; φei) denote the token-level 210 ✓PIXEL : Interpolate, apply logistic φc : Glyph weights Bernoulli params µ Figure 5: In order to produce Bernoulli parameter matrices θPIXEL of variable width, we interpolate over columns of φc with vectors µ, and apply the logistic function to each result. Bernoulli parameter for this pixel, we can write: [X GLYPH i ]jk ∼Bernoulli θ PIXEL(j, k, gi; φei)  The interpolation process for a single row is depicted in Figure 5. We define a constant interpolation vector µ(gi, k) that is specific to the glyph box width gi and glyph box column k. Each µ(gi, k) is shaped according to a Gaussian centered at the relative column position in φei. The glyph pixel Bernoulli parameters are defined as follows: θ PIXEL(j, k,gi; φei) = logistic  w X k′=1 h µ(gi, k)k′ · [φei]jk′ i The fact that the parameterization is log-linear will ensure that, during the unsupervised learning process, updating the shape parameters φc is simple and feasible. By varying the magnitude of µ we can change the level of smoothing in the logistic model and cause it to permit areas that are over-inked. This is the effect that di controls. By offsetting the rows of φc that we interpolate weights from, we change the vertical offset of the glyph, which is controlled by vi. The full pixel generation process is diagrammed in Figure 4, where the dependence of θPIXEL on di and vi is also represented. 4 Learning We use the EM algorithm (Dempster et al., 1977) to find the maximum-likelihood font parameters: φc, θLPAD c , θGLYPH c , and θRPAD c . The image X is the only observed random variable in our model. The identities of the characters E the typesetting layout T and the inking R will all be unobserved. We do not learn θINK and θVERT, which are set to the uniform distribution. 4.1 Expectation Maximization During the E-step we compute expected counts for E and T, but maximize over R, for which we compute hard counts. Our model is an instance of a hidden semi-Markov model (HSMM), and therefore the computation of marginals is tractable with the semi-Markov forward-backward algorithm (Levinson, 1986). During the M-step, we update the parameters θLPAD c , θRPAD c using the standard closed-form multinomial updates and use a specialized closedform update for θGLYPH c that enforces unimodality of the glyph width distribution.3 The glyph weights, φc, do not have a closed-form update. The noise model that φc parameterizes is a local log-linear model, so we follow the approach of Berg-Kirkpatrick et al. (2010) and use L-BFGS (Liu and Nocedal, 1989) to optimize the expected likelihood with respect to φc. 4.2 Coarse-to-Fine Learning and Inference The number of states in the dynamic programming lattice grows exponentially with the order of the language model (Jelinek, 1998; Koehn, 2004). 
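Before turning to inference, the interpolation-plus-logistic construction above can be made concrete with a short sketch. This is a rough illustration, not our implementation: the exact centering and normalization of the interpolation vector mu are assumptions, and the inking level d_i (which scales the magnitude of mu) and the vertical offset v_i (which shifts the rows of phi) are simplified away here.

import numpy as np

def pixel_bernoulli_params(phi, g, sigma=1.0):
    # phi: (h, w) matrix of type-level glyph weights for one character type.
    # g:   token-level glyph width (number of output columns), g <= w.
    # Returns an (h, g) matrix of Bernoulli parameters theta_PIXEL(j, k, g).
    h, w = phi.shape
    theta = np.empty((h, g))
    for k in range(g):
        # Interpolation vector mu(g, k): Gaussian-shaped weights over the w
        # type-level columns, centered at the relative position of column k.
        center = (k + 0.5) / g * w - 0.5       # assumed centering convention
        mu = np.exp(-0.5 * ((np.arange(w) - center) / sigma) ** 2)
        # Interpolate the type-level weights, then apply the logistic function.
        theta[:, k] = 1.0 / (1.0 + np.exp(-phi.dot(mu)))
    return theta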
As a result, inference can become slow when the language model order n is large. To remedy this, we take a coarse-to-fine approach to both learning and inference. On each iteration of EM, we perform two passes: a coarse pass using a low-order language model, and a fine pass using a high-order language model (Petrov et al., 2008; Zhang and Gildea, 2008). We use the marginals4 from the coarse pass to prune states from the dynamic program of the fine pass. In the early iterations of EM, our font parameters are still inaccurate, and to prune heavily based on such parameters would rule out correct analyses. Therefore, we gradually increase the aggressiveness of pruning over the course of EM. To ensure that each iteration takes approximately the same amount of computation, we also gradually increase the order of the fine pass, only reaching the full order n on the last iteration. To produce a decoding of the image into text, on the final iteration we run a Viterbi pass using the pruned fine model. 3We compute the weighted mean and weighted variance of the glyph width expected counts. We set θGLYPH c to be proportional to a discretized Gaussian with the computed mean and variance. This update is approximate in the sense that it does not necessarily find the unimodal multinomial that maximizes expected log-likelihood, but it works well in practice. 4In practice, we use max-marginals for pruning to ensure that there is still a valid path in the pruned lattice. 211 Old Bailey, 1725: Old Bailey, 1875: Trove, 1883: Trove, 1823: (a) (b) (c) (d) Figure 6: Portions of several documents from our test set representing a range of difficulties are displayed. On document (a), which exhibits noisy typesetting, our system achieves a word error rate (WER) of 25.2. Document (b) is cleaner in comparison, and on it we achieve a WER of 15.4. On document (c), which is also relatively clean, we achieve a WER of 12.5. On document (d), which is severely degraded, we achieve a WER of 70.0. 5 Data We perform experiments on two historical datasets consisting of images of documents printed between 1700 and 1900 in England and Australia. Examples from both datasets are displayed in Figure 6. 5.1 Old Bailey The first dataset comes from a large set of images of the proceedings of the Old Bailey, a criminal court in London, England (Shoemaker, 2005). The Old Bailey curatorial effort, after deciding that current OCR systems do not adequately handle 18th century fonts, manually transcribed the documents into text. We will use these manual transcriptions to evaluate the output of our system. From the Old Bailey proceedings, we extracted a set of 20 images, each consisting of 30 lines of text to use as our first test set. We picked 20 documents, printed in consecutive decades. The first document is from 1715 and the last is from 1905. We choose the first document in each of the corresponding years, choose a random page in the document, and extracted an image of the first 30 consecutive lines of text consisting of full sentences.5 The ten documents in the Old Bailey dataset that were printed before 1810 use the long s glyph, while the remaining ten do not. 5.2 Trove Our second dataset is taken from a collection of digitized Australian newspapers that were printed between the years of 1803 and 1954. This collection is called Trove, and is maintained by the the National Library of Australia (Holley, 2010). 
We extracted ten images from this collection in the same way that we extracted images from Old Bailey, but starting from the year 1803. We manually produced our own gold annotations for these ten images. Only the first document of Trove uses the long s glyph. 5.3 Pre-processing Many of the images in historical collections are bitonal (binary) as a result of how they were captured on microfilm for storage in the 1980s (Arlitsch and Herbert, 2004). This is part of the reason our model is designed to work directly with binarized images. For consistency, we binarized the images in our test sets that were not already binary by thresholding pixel values. Our model requires that the image be presegmented into lines of text. We automatically segment lines by training an HSMM over rows of pixels. After the lines are segmented, each line is resampled so that its vertical resolution is 30 pixels. The line extraction process also identifies pixels that are not located in central text regions, and are part of large connected components of ink, spanning multiple lines. The values of such pixels are treated as unobserved in the model since, more often than not, they are part of ink blotches. 5This ruled out portions of the document with extreme structural abnormalities, like title pages and lists. These might be interesting to model, but are not within the scope of this paper. 212 6 Experiments We evaluate our system by comparing our text recognition accuracy to that of two state-of-the-art systems. 6.1 Baselines Our first baseline is Google’s open source OCR system, Tesseract (Smith, 2007). Tesseract takes a pipelined approach to recognition. Before recognizing the text, the document is broken into lines, and each line is segmented into words. Then, Tesseract uses a classifier, aided by a wordunigram language model, to recognize whole words. Our second baseline, ABBYY FineReader 11 Professional Edition,6 is a state-of-the-art commercial OCR system. It is the OCR system that the National Library of Australia used to recognize the historical documents in Trove (Holley, 2010). 6.2 Evaluation We evaluate the output of our system and the baseline systems using two metrics: character error rate (CER) and word error rate (WER). Both these metrics are based on edit distance. CER is the edit distance between the predicted and gold transcriptions of the document, divided by the number of characters in the gold transcription. WER is the word-level edit distance (words, instead of characters, are treated as tokens) between predicted and gold transcriptions, divided by the number of words in the gold transcription. When computing WER, text is tokenized into words by splitting on whitespace. 6.3 Language Model We ran experiments using two different language models. The first language model was trained on the initial one million sentences of the New York Times (NYT) portion of the Gigaword corpus (Graff et al., 2007), which contains about 36 million words. This language model is out of domain for our experimental documents. To investigate the effects of using an in domain language model, we created a corpus composed of the manual annotations of all the documents in the Old Bailey proceedings, excluding those used in our test set. This corpus consists of approximately 32 million words. 
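The CER and WER metrics of Section 6.2 both reduce to a standard Levenshtein edit-distance computation. A minimal sketch of how they might be computed (not the exact evaluation script used in our experiments) is:

def edit_distance(a, b):
    # Levenshtein distance between two sequences, computed with a rolling row.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def cer(predicted, gold):
    # Character error rate: character-level edit distance, normalized by the
    # number of characters in the gold transcription.
    return edit_distance(predicted, gold) / len(gold)

def wer(predicted, gold):
    # Word error rate: edit distance over whitespace tokens, normalized by the
    # number of words in the gold transcription.
    return edit_distance(predicted.split(), gold.split()) / len(gold.split())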
In all experiments we used a character n-gram order of six for the final Viterbi de6http://www.abbyy.com System CER WER Old Bailey Google Tesseract 29.6 54.8 ABBYY FineReader 15.1 40.0 Ocular w/ NYT (this work) 12.6 28.1 Ocular w/ OB (this work) 9.7 24.1 Trove Google Tesseract 37.5 59.3 ABBYY FineReader 22.9 49.2 Ocular w/ NYT (this work) 14.9 33.0 Table 1: We evaluate the predicted transcriptions in terms of both character error rate (CER) and word error rate (WER), and report macro-averages across documents. We compare with two baseline systems: Google’s open source OCR system, Tessearact, and a state-of-the-art commercial system, ABBYY FineReader. We refer to our system as Ocular w/ NYT and Ocular w/ OB, depending on whether NYT or Old Bailey is used to train the language model. coding pass and an order of three for all coarse passes. 6.4 Initialization and Tuning We used as a development set ten additional documents from the Old Bailey proceedings and five additional documents from Trove that were not part of our test set. On this data, we tuned the model’s hyperparameters7 and the parameters of the pruning schedule for our coarse-to-fine approach. In experiments we initialized θRPAD c and θLPAD c to be uniform, and initialized θGLYPH c and φc based on the standard modern fonts included with the Ubuntu Linux 12.04 distribution.8 For documents that use the long s glyph, we introduce a special character type for the non-word-final s, and initialize its parameters from a mixture of the modern f and | glyphs.9 7 Results and Analysis The results of our experiments are summarized in Table 1. We refer to our system as Ocular w/ NYT or Ocular w/ OB, depending on whether the language model was trained using NYT or Old Bailey, respectively. We compute macro-averages 7One of the hyperparameters we tune is the exponent of the language model. This balances the contributions of the language model and the typesetting model to the posterior (Och and Ney, 2004). 8http://www.ubuntu.com/ 9Following Berg-Kirkpatrick et al. (2010), we use a regularization term in the optimization of the log-linear model parameters φc during the M-step. Instead of regularizing towards zero, we regularize towards the initializer. This slightly improves performance on our development set and can be thought of as placing a prior on the glyph shape parameters. 213 (c) Trove, 1883: (b) Old Bailey, 1885: (a) Old Bailey, 1775: the prisoner at the bar. Jacob Lazarus and his taken ill and taken away – I remember how the murderers came to learn the nation in Predicted text: Predicted typesetting: Image: Predicted text: Predicted typesetting: Image: Predicted text: Predicted typesetting: Image: Figure 7: For each of these portions of test documents, the first line shows the transcription predicted by our model and the second line shows a representation of the learned typesetting layout. The grayscale glyphs show the Bernoulli pixel distributions learned by our model, while the padding regions are depicted in blue. The third line shows the input image. across documents from all years. Our system, using the NYT language model, achieves an average WER of 28.1 on Old Bailey and an average WER of 33.0 on Trove. This represents a substantial error reduction compared to both baseline systems. If we average over the documents in both Old Bailey and Trove, we find that Tesseract achieved an average WER of 56.3, ABBYY FineReader achieved an average WER of 43.1, and our system, using the NYT language model, achieved an average WER of 29.7. 
This means that while Tesseract incorrectly predicts more than half of the words in these documents, our system gets more than threequarters of them right. Overall, we achieve a relative reduction in WER of 47% compared to Tesseract and 31% compared to ABBYY FineReader. The baseline systems do not have special provisions for the long s glyph. In order to make sure the comparison is fair, we separately computed average WER on only the documents from after 1810 (which do no use the long s glyph). We found that using this evaluation our system actually acheives a larger relative reduction in WER: 50% compared to Tesseract and 35% compared to ABBYY FineReader. Finally, if we train the language model using the Old Bailey corpus instead of the NYT corpus, we see an average improvement of 4 WER on the Old Bailey test set. This means that the domain of the language model is important, but, the results are not affected drastically even when using a language model based on modern corpora (NYT). 7.1 Learned Typesetting Layout Figure 7 shows a representation of the typesetting layout learned by our model for portions of several Initializer 1700 1740 1780 1820 1860 1900 Figure 8: The central glyph is a representation of the initial model parameters for the glyph shape for g, and surrounding this are the learned parameters for documents from various years. test documents. For each portion of a test document, the first line shows the transcription predicted by our model, and the second line shows padding and glyph regions predicted by the model, where the grayscale glyphs represent the learned Bernoulli parameters for each pixel. The third line shows the input image. Figure 7a demonstrates a case where our model has effectively explained both the uneven baseline and over-inked glyphs by using the vertical offsets vi and inking variables di. In Figure 7b the model has used glyph widths gi and vertical offsets to explain the thinning of glyphs and falling baseline that occurred near the binding of the book. In separate experiments on the Old Bailey test set, using the NYT language model, we found that removing the vertical offset variables from the model increased WER by 22, and removing the inking variables increased WER by 16. This indicates that it is very important to model both these aspects of printing press rendering. 214 Figure 9: This Old Bailey document from 1719 has severe ink bleeding from the facing page. We annotated these blotches (in red) and treated the corresponding pixels as unobserved in the model. The layout shown is predicted by the model. Figure 7c shows the output of our system on a difficult document. Here, missing characters and ink blotches confuse the model, which picks something that is reasonable according to the language model, but incorrect. 7.2 Learned Fonts It is interesting to look at the fonts learned by our system, and track how historical fonts changed over time. Figure 8 shows several grayscale images representing the Bernoulli pixel probabilities for the most likely width of the glyph for g under various conditions. At the center is the representation of the initial parameter values, and surrounding this are the learned parameters for documents from various years. The learned shapes are visibly different from the initializer, which is essentially an average of modern fonts, and also vary across decades. We can ask to what extent learning the font structure actually improved our performance. 
If we turn off learning and just use the initial parameters to decode, WER increases by 8 on the Old Bailey test set when using the NYT language model. 7.3 Unobserved Ink Blotches As noted earlier, one strength of our generative model is that we can make the values of certain pixels unobserved in the model, and let inference fill them in. We conducted an additional experiment on a document from the Old Bailey proceedings that was printed in 1719. This document, a fragment of which is shown in Figure 9, has severe ink bleeding from the facing page. We manually annotated the ink blotches (shown in red), and made them unobserved in the model. The resulting typesetting layout learned by the model is also shown in Figure 9. The model correctly predicted most of the obscured words. Running the model with the manually specified unobserved pixels reduced the WER on this document from 58 to 19 when using the NYT language model. 7.4 Remaining Errors We performed error analysis on our development set by randomly choosing 100 word errors from the WER alignment and manually annotating them with relevant features. Specifically, for each word error we recorded whether or not the error contained punctuation (either in the predicted word or the gold word), whether the text in the corresponding portion of the original image was italicized, and whether the corresponding portion of the image exhibited over-inking, missing ink, or significant ink blotches. These last three feature types are subjective in nature but may still be informative. We found that 56% of errors were accompanied by over-inking, 50% of errors were accompanied by ink blotches, 42% of errors contained punctuation, 21% of errors showed missing ink, and 12% of errors contained text that was italicized in the original image. Our own subjective assessment indicates that many of these error features are in fact causal. More often than not, italicized text is incorrectly transcribed. In cases of extreme ink blotching, or large areas of missing ink, the system usually makes an error. 8 Conclusion We have demonstrated a model, based on the historical typesetting process, that effectively learns font structure in an unsupervised fashion to improve transcription of historical documents into text. The parameters of the learned fonts are interpretable, as are the predicted typesetting layouts. Our system achieves state-of-the-art results, significantly outperforming two state-of-the-art baseline systems. 215 References Kenning Arlitsch and John Herbert. 2004. Microfilm, paper, and OCR: Issues in newspaper digitization. the Utah digital newspapers program. Microform & Imaging Review. Taylor Berg-Kirkpatrick and Dan Klein. 2011. Simple effective decipherment via combinatorial optimization. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cˆot´e, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies:. Arthur Dempster, Nan Laird, and Donald Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2007. English Gigaword third edition. Linguistic Data Consortium, Catalog Number LDC2007T07. Tin Kam Ho and George Nagy. 2000. OCR with no shape training. 
In Proceedings of the 15th International Conference on Pattern Recognition. Rose Holley. 2010. Trove: Innovation in access to information in Australia. Ariadne. Gary Huang, Erik G Learned-Miller, and Andrew McCallum. 2006. Cryptogram decoding for optical character recognition. University of MassachusettsAmherst Technical Report. Fred Jelinek. 1998. Statistical methods for speech recognition. MIT press. Andrew Kae and Erik Learned-Miller. 2009. Learning on the fly: font-free approaches to difficult OCR problems. In Proceedings of the 2009 International Conference on Document Analysis and Recognition. Andrew Kae, Gary Huang, Carl Doersch, and Erik Learned-Miller. 2010. Improving state-of-theart OCR through high-precision document-specific modeling. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition. Vladimir Kluzner, Asaf Tzadok, Yuval Shimony, Eugene Walach, and Apostolos Antonacopoulos. 2009. Word-based adaptive OCR for historical books. In Proceedings of the 2009 International Conference on on Document Analysis and Recognition. Vladimir Kluzner, Asaf Tzadok, Dan Chevion, and Eugene Walach. 2011. Hybrid approach to adaptive OCR for historical books. In Proceedings of the 2011 International Conference on Document Analysis and Recognition. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. Machine translation: From real users to research. Okan Kolak, William Byrne, and Philip Resnik. 2003. A generative probabilistic OCR model for NLP applications. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Gary Kopec and Mauricio Lomelin. 1996. Documentspecific character template estimation. In Proceedings of the International Society for Optics and Photonics. Gary Kopec, Maya Said, and Kris Popat. 2001. Ngram language models for document image decoding. In Proceedings of Society of Photographic Instrumentation Engineers. Stephen Levinson. 1986. Continuously variable duration hidden Markov models for automatic speech recognition. Computer Speech & Language. Dong C Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical programming. Franz Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics. Slav Petrov, Aria Haghighi, and Dan Klein. 2008. Coarse-to-fine syntactic machine translation using language projections. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Sujith Ravi and Kevin Knight. 2008. Attacking decipherment problems optimally with low-order ngram models. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Sujith Ravi and Kevin Knight. 2011. Bayesian inference for Zodiac and other homophonic ciphers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Robert Shoemaker. 2005. Digital London: Creating a searchable web of interlinked sources on eighteenth century London. Electronic Library and Information Systems. Ray Smith. 2007. An overview of the tesseract ocr engine. In Proceedings of the Ninth International Conference on Document Analysis and Recognition. 
216 Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Georgios Vamvakas, Basilios Gatos, Nikolaos Stamatopoulos, and Stavros Perantonis. 2008. A complete optical character recognition methodology for historical documents. In The Eighth IAPR International Workshop on Document Analysis Systems. Hao Zhang and Daniel Gildea. 2008. Efficient multipass decoding for synchronous context free grammars. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. 217
2013
21